* [PATCH V2 00/17] Unified entry point for other blocks to interact with power
@ 2021-11-30  7:42 Evan Quan
  2021-11-30  7:42 ` [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power Evan Quan
                   ` (17 more replies)
  0 siblings, 18 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

There are several problems with the current power implementations:
1. Too many internal details are exposed to other blocks. To interact with
   power, they need to know which power framework is in use (powerplay vs.
   swsmu) or even whether a particular API is implemented.
2. Many cross calls exist, which makes it hard to get a whole picture of
   the code hierarchy. That makes any code change or addition error-prone.
3. Many different locks are in use. In total, 13 different locks exist
   within power, and some of them were even designed for the same purpose.

To ease the problems above, this patch series tries to
1. provide a unified entry point for other blocks to interact with power.
2. relocate some source code pieces/headers to avoid cross calls.
3. enforce unified lock protection on the entry point APIs above.
   That makes future optimization of the unnecessary power locks possible.
   A sketch of the entry-point pattern these APIs converge on follows below.
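
For illustration only (this sketch is not part of any patch in the series),
the unified entry points converge on the following shape: all knowledge of
the backing framework stays inside amdgpu_dpm.c, and a single lock guards
the call. The do_something() callback name here is hypothetical:

	int amdgpu_dpm_do_something(struct amdgpu_device *adev, uint32_t arg)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
		int ret;

		if (!pp_funcs->do_something) /* hypothetical callback */
			return -EOPNOTSUPP;

		mutex_lock(&adev->pm.mutex);
		ret = pp_funcs->do_something(adev->powerplay.pp_handle, arg);
		mutex_unlock(&adev->pm.mutex);

		return ret;
	}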

Evan Quan (17):
  drm/amd/pm: do not expose implementation details to other blocks out
    of power
  drm/amd/pm: do not expose power implementation details to amdgpu_pm.c
  drm/amd/pm: do not expose power implementation details to display
  drm/amd/pm: do not expose those APIs used internally only in
    amdgpu_dpm.c
  drm/amd/pm: do not expose those APIs used internally only in si_dpm.c
  drm/amd/pm: do not expose the API used internally only in kv_dpm.c
  drm/amd/pm: create a new holder for those APIs used only by legacy
    ASICs(si/kv)
  drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure
  drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations
  drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c
  drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task()
  drm/amd/pm: drop redundant or unused APIs and data structures
  drm/amd/pm: do not expose the smu_context structure used internally in
    power
  drm/amd/pm: relocate the power related headers
  drm/amd/pm: drop unnecessary gfxoff controls
  drm/amd/pm: revise the performance level setting APIs
  drm/amd/pm: unified lock protections in amdgpu_dpm.c

 drivers/gpu/drm/amd/amdgpu/aldebaran.c        |    2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |    7 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   |   25 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |    6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |   18 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |    7 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |    5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |    5 +-
 drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |    2 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |    6 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  246 +-
 .../gpu/drm/amd/include/kgd_pp_interface.h    |   14 +
 drivers/gpu/drm/amd/pm/Makefile               |   12 +-
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 2435 ++++++++---------
 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  |   94 +
 drivers/gpu/drm/amd/pm/amdgpu_pm.c            |  568 ++--
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |  339 +--
 .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |   32 +
 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile    |   32 +
 .../pm/{powerplay => legacy-dpm}/cik_dpm.h    |    0
 .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.c |   47 +-
 .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.h |    0
 .../amd/pm/{powerplay => legacy-dpm}/kv_smc.c |    0
 .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    | 1510 ++++++++++
 .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h    |   71 +
 .../amd/pm/{powerplay => legacy-dpm}/ppsmc.h  |    0
 .../pm/{powerplay => legacy-dpm}/r600_dpm.h   |    0
 .../amd/pm/{powerplay => legacy-dpm}/si_dpm.c |  111 +-
 .../amd/pm/{powerplay => legacy-dpm}/si_dpm.h |    7 +
 .../amd/pm/{powerplay => legacy-dpm}/si_smc.c |    0
 .../{powerplay => legacy-dpm}/sislands_smc.h  |    0
 drivers/gpu/drm/amd/pm/powerplay/Makefile     |    4 -
 .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  |   51 +-
 .../drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c   |   10 +-
 .../pm/{ => powerplay}/inc/amd_powerplay.h    |    0
 .../drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h |    0
 .../amd/pm/{ => powerplay}/inc/fiji_ppsmc.h   |    0
 .../pm/{ => powerplay}/inc/hardwaremanager.h  |    0
 .../drm/amd/pm/{ => powerplay}/inc/hwmgr.h    |    3 -
 .../{ => powerplay}/inc/polaris10_pwrvirus.h  |    0
 .../amd/pm/{ => powerplay}/inc/power_state.h  |    0
 .../drm/amd/pm/{ => powerplay}/inc/pp_debug.h |    0
 .../amd/pm/{ => powerplay}/inc/pp_endian.h    |    0
 .../amd/pm/{ => powerplay}/inc/pp_thermal.h   |    0
 .../amd/pm/{ => powerplay}/inc/ppinterrupt.h  |    0
 .../drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu10.h    |    0
 .../pm/{ => powerplay}/inc/smu10_driver_if.h  |    0
 .../pm/{ => powerplay}/inc/smu11_driver_if.h  |    0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu71.h    |    0
 .../pm/{ => powerplay}/inc/smu71_discrete.h   |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu72.h    |    0
 .../pm/{ => powerplay}/inc/smu72_discrete.h   |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu73.h    |    0
 .../pm/{ => powerplay}/inc/smu73_discrete.h   |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu74.h    |    0
 .../pm/{ => powerplay}/inc/smu74_discrete.h   |    0
 .../drm/amd/pm/{ => powerplay}/inc/smu75.h    |    0
 .../pm/{ => powerplay}/inc/smu75_discrete.h   |    0
 .../amd/pm/{ => powerplay}/inc/smu7_common.h  |    0
 .../pm/{ => powerplay}/inc/smu7_discrete.h    |    0
 .../amd/pm/{ => powerplay}/inc/smu7_fusion.h  |    0
 .../amd/pm/{ => powerplay}/inc/smu7_ppsmc.h   |    0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h |    0
 .../amd/pm/{ => powerplay}/inc/smu8_fusion.h  |    0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h |    0
 .../pm/{ => powerplay}/inc/smu9_driver_if.h   |    0
 .../{ => powerplay}/inc/smu_ucode_xfer_cz.h   |    0
 .../{ => powerplay}/inc/smu_ucode_xfer_vi.h   |    0
 .../drm/amd/pm/{ => powerplay}/inc/smumgr.h   |    0
 .../amd/pm/{ => powerplay}/inc/tonga_ppsmc.h  |    0
 .../amd/pm/{ => powerplay}/inc/vega10_ppsmc.h |    0
 .../inc/vega12/smu9_driver_if.h               |    0
 .../amd/pm/{ => powerplay}/inc/vega12_ppsmc.h |    0
 .../amd/pm/{ => powerplay}/inc/vega20_ppsmc.h |    0
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |   95 +-
 .../amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h  |    0
 .../drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h   |   20 +-
 .../amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h   |    0
 .../inc/smu11_driver_if_arcturus.h            |    0
 .../inc/smu11_driver_if_cyan_skillfish.h      |    0
 .../{ => swsmu}/inc/smu11_driver_if_navi10.h  |    0
 .../inc/smu11_driver_if_sienna_cichlid.h      |    0
 .../{ => swsmu}/inc/smu11_driver_if_vangogh.h |    0
 .../amd/pm/{ => swsmu}/inc/smu12_driver_if.h  |    0
 .../inc/smu13_driver_if_aldebaran.h           |    0
 .../inc/smu13_driver_if_yellow_carp.h         |    0
 .../pm/{ => swsmu}/inc/smu_11_0_cdr_table.h   |    0
 .../drm/amd/pm/{ => swsmu}/inc/smu_types.h    |    0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h    |    0
 .../pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h    |    0
 .../pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h  |    0
 .../amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h  |    0
 .../pm/{ => swsmu}/inc/smu_v11_0_pptable.h    |    0
 .../amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h   |    0
 .../amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h  |    0
 .../amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h   |    0
 .../amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h  |    0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h    |    0
 .../amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h  |    0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h    |    0
 .../amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h |    0
 .../pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h    |    0
 .../pm/{ => swsmu}/inc/smu_v13_0_pptable.h    |    0
 .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |   10 +-
 .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |    9 +-
 .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |   34 +-
 .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |   11 +-
 .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |   10 +-
 .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |   15 +-
 drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h        |    4 +
 114 files changed, 3657 insertions(+), 2671 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
 create mode 100644 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
 create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/cik_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.c (99%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_smc.c (100%)
 create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
 create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/r600_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.c (99%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.h (99%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_smc.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/sislands_smc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/amd_powerplay.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/fiji_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hardwaremanager.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hwmgr.h (99%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/polaris10_pwrvirus.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/power_state.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_debug.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_endian.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_thermal.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/ppinterrupt.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu11_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_common.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_fusion.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8_fusion.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_cz.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_vi.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smumgr.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/tonga_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega10_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12/smu9_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega20_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h (98%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_arcturus.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_cyan_skillfish.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_navi10.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_sienna_cichlid.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_vangogh.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu12_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_aldebaran.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_yellow_carp.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_11_0_cdr_table.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_types.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_pptable.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_pptable.h (100%)

-- 
2.29.0



* [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  8:09   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c Evan Quan
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Those implementation details (whether swsmu is supported, whether some
ppt_funcs callback is implemented, how internal statistics are accessed,
etc.) should be kept internal to power. Exposing implementation details
is not good practice and is even error-prone.
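
As an illustration only (not taken from this patch), a block outside of
power now needs nothing beyond the new amdgpu_dpm_* entry points; a
nonzero return simply means the query is unsupported or failed:

	uint32_t gfxoff_status;

	/* no is_support_sw_smu() or ppt_funcs checks at the call site */
	if (!amdgpu_dpm_get_status_gfxoff(adev, &gfxoff_status))
		dev_info(adev->dev, "gfxoff status: %u\n", gfxoff_status);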

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
---
 drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
 drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
 .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95 +++++++++++++++++++
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
 drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
 13 files changed, 155 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
index bcfdb63b1d42..a545df4efce1 100644
--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
@@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
 	adev->gfx.rlc.funcs->resume(adev);
 
 	/* Wait for FW reset event complete */
-	r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
+	r = amdgpu_dpm_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
 	if (r) {
 		dev_err(adev->dev,
 			"Failed to get response from firmware after reset\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 164d6a9e9fbb..0d1f00b24aae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, &max_freq);
-		if (ret || val > max_freq || val < min_freq)
-			return -EINVAL;
-		ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, (uint32_t)val, (uint32_t)val);
-	} else {
-		return 0;
+	ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK, &min_freq, &max_freq);
+	if (ret == -EOPNOTSUPP) {
+		ret = 0;
+		goto out;
 	}
+	if (ret || val > max_freq || val < min_freq) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK, (uint32_t)val, (uint32_t)val);
+	if (ret)
+		ret = -EINVAL;
 
+out:
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
-	if (ret)
-		return -EINVAL;
-
-	return 0;
+	return ret;
 }
 
 DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 1989f9e9379e..41cc1ffb5809 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct amdgpu_device *adev)
 	if (adev->asic_type == CHIP_ARCTURUS &&
 	    amdgpu_passthrough(adev) &&
 	    adev->gmc.xgmi.num_physical_nodes > 1)
-		smu_set_light_sbr(&adev->smu, true);
+		amdgpu_dpm_set_light_sbr(adev, true);
 
 	if (adev->gmc.xgmi.num_physical_nodes > 1) {
 		mutex_lock(&mgpu_info.mutex);
@@ -2857,7 +2857,7 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
 	int i, r;
 
 	if (adev->in_s0ix)
-		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry);
+		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D3Entry);
 
 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
 		if (!adev->ip_blocks[i].status.valid)
@@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
 		return 0;
 
 	if (adev->in_s0ix)
-		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry);
+		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D0Entry);
 
 	/* post card */
 	if (amdgpu_device_need_post(adev)) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 1916ec84dd71..3d8f82dc8c97 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct amdgpu_device *adev, uint32_t *value)
 
 	mutex_lock(&adev->gfx.gfx_off_mutex);
 
-	r = smu_get_status_gfxoff(adev, value);
+	r = amdgpu_dpm_get_status_gfxoff(adev, value);
 
 	mutex_unlock(&adev->gfx.gfx_off_mutex);
 
@@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev)
 	}
 	return amdgpu_num_kcq;
 }
-
-/* amdgpu_gfx_state_change_set - Handle gfx power state change set
- * @adev: amdgpu_device pointer
- * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
- *
- */
-
-void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state)
-{
-	mutex_lock(&adev->pm.mutex);
-	if (adev->powerplay.pp_funcs &&
-	    adev->powerplay.pp_funcs->gfx_state_change_set)
-		((adev)->powerplay.pp_funcs->gfx_state_change_set(
-			(adev)->powerplay.pp_handle, state));
-	mutex_unlock(&adev->pm.mutex);
-}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
index f851196c83a5..776c886fd94a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
@@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
 	AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
 };
 
-/* Argument for PPSMC_MSG_GpuChangeState */
-enum gfx_change_state {
-	sGpuChangeState_D0Entry = 1,
-	sGpuChangeState_D3Entry,
-};
-
 #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
 #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
 
@@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct amdgpu_device *adev,
 uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
 void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v);
 int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev);
-void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state);
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index 3c623e589b79..35c4aec04a7e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct amdgpu_device *adev, struct ras_err_d
 	 * choosing right query method according to
 	 * whether smu support query error information
 	 */
-	ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
+	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
 	if (ret == -EOPNOTSUPP) {
 		if (adev->umc.ras_funcs &&
 			adev->umc.ras_funcs->query_ras_error_count)
@@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
 		if (ret)
 			goto free;
 
-		if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
-			adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
+		amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
 	}
 
 #ifdef CONFIG_X86_MCE_AMD
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
index 6e4bea012ea4..5fed26c8db44 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
@@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
 	int ret = 0;
 
 	kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
-	ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
+	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
 	if (ret == -EOPNOTSUPP) {
 		if (adev->umc.ras_funcs &&
 		    adev->umc.ras_funcs->query_ras_error_count)
@@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
 						err_data->err_addr_cnt);
 			amdgpu_ras_save_bad_pages(adev);
 
-			if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
-				adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
+			amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
 		}
 
 		amdgpu_ras_reset_gpu(adev);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
index deae12dc777d..329a4c89f1e6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
@@ -222,7 +222,7 @@ void kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
 
 	len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
 		       KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
-		       atomic64_read(&dev->adev->smu.throttle_int_counter));
+		       amdgpu_dpm_get_thermal_throttling_counter(dev->adev));
 
 	add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE,	fifo_in, len);
 }
diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index 5c0867ebcfce..2e295facd086 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -26,6 +26,10 @@
 
 extern const struct amdgpu_ip_block_version pp_smu_ip_block;
 
+enum smu_event_type {
+	SMU_EVENT_RESET_COMPLETE = 0,
+};
+
 struct amd_vce_state {
 	/* vce clocks */
 	u32 evclk;
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 08362d506534..9b332c8a0079 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
 
 	return 0;
 }
+
+int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
+{
+	return smu_set_light_sbr(&adev->smu, enable);
+}
+
+int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
+{
+	int ret = 0;
+
+	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
+		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
+
+	return ret;
+}
+
+int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  uint32_t *min,
+				  uint32_t *max)
+{
+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
+	switch (type) {
+	case PP_SCLK:
+		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
+	default:
+		return -EINVAL;
+	}
+}
+
+int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
+				   enum pp_clock_type type,
+				   uint32_t min,
+				   uint32_t max)
+{
+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
+	switch (type) {
+	case PP_SCLK:
+		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
+	default:
+		return -EINVAL;
+	}
+}
+
+int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
+			      enum smu_event_type event,
+			      uint64_t event_arg)
+{
+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
+	return smu_wait_for_event(&adev->smu, event, event_arg);
+}
+
+int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
+{
+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
+	return smu_get_status_gfxoff(&adev->smu, value);
+}
+
+uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
+{
+	return atomic64_read(&adev->smu.throttle_int_counter);
+}
+
+/* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
+ * @adev: amdgpu_device pointer
+ * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
+ *
+ */
+void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
+				 enum gfx_change_state state)
+{
+	mutex_lock(&adev->pm.mutex);
+	if (adev->powerplay.pp_funcs &&
+	    adev->powerplay.pp_funcs->gfx_state_change_set)
+		((adev)->powerplay.pp_funcs->gfx_state_change_set(
+			(adev)->powerplay.pp_handle, state));
+	mutex_unlock(&adev->pm.mutex);
+}
+
+int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
+			    void *umc_ecc)
+{
+	if (!is_support_sw_smu(adev))
+		return -EOPNOTSUPP;
+
+	return smu_get_ecc_info(&adev->smu, umc_ecc);
+}
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 16e3f72d31b9..7289d379a9fb 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -23,6 +23,12 @@
 #ifndef __AMDGPU_DPM_H__
 #define __AMDGPU_DPM_H__
 
+/* Argument for PPSMC_MSG_GpuChangeState */
+enum gfx_change_state {
+	sGpuChangeState_D0Entry = 1,
+	sGpuChangeState_D3Entry,
+};
+
 enum amdgpu_int_thermal_type {
 	THERMAL_TYPE_NONE,
 	THERMAL_TYPE_EXTERNAL,
@@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
 void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
 int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
-
+int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
+int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
+int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
+				       enum pp_clock_type type,
+				       uint32_t *min,
+				       uint32_t *max);
+int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
+				        enum pp_clock_type type,
+				        uint32_t min,
+				        uint32_t max);
+int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
+		       uint64_t event_arg);
+int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
+uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev);
+void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
+				 enum gfx_change_state state);
+int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
+			    void *umc_ecc);
 #endif
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
index f738f7dc20c9..29791bb21fba 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
@@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
 	uint32_t clk_dependency;
 };
 
-enum smu_event_type {
-
-	SMU_EVENT_RESET_COMPLETE = 0,
-};
-
 #define SMU_TABLE_INIT(tables, table_id, s, a, d)	\
 	do {						\
 		tables[table_id].size = s;		\
@@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context *smu);
 
 int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
 
-int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
+int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
 
 int smu_set_light_sbr(struct smu_context *smu, bool enable);
 
-int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
+int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
 		       uint64_t event_arg);
 int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
 int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 5839918cb574..ef7d0e377965 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void *handle,
 	return ret;
 }
 
-int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
+int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
 {
-	int ret = 0;
-	struct smu_context *smu = &adev->smu;
+	if (!smu->ppt_funcs->get_gfx_off_status)
+		return -EINVAL;
 
-	if (is_support_sw_smu(adev) && smu->ppt_funcs->get_gfx_off_status)
-		*value = smu_get_gfx_off_status(smu);
-	else
-		ret = -EINVAL;
+	*value = smu_get_gfx_off_status(smu);
 
-	return ret;
+	return 0;
 }
 
 int smu_set_soft_freq_range(struct smu_context *smu,
@@ -3167,11 +3164,10 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
 	.get_smu_prv_buf_details = smu_get_prv_buffer_details,
 };
 
-int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
+int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
 		       uint64_t event_arg)
 {
 	int ret = -EINVAL;
-	struct smu_context *smu = &adev->smu;
 
 	if (smu->ppt_funcs->wait_for_event) {
 		mutex_lock(&smu->mutex);
-- 
2.29.0



* [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
  2021-11-30  7:42 ` [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 13:04   ` Chen, Guchun
  2021-11-30  7:42 ` [PATCH V2 03/17] drm/amd/pm: do not expose power implementation details to display Evan Quan
                   ` (15 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

amdgpu_pm.c holds all the user sysfs/hwmon interfaces. It's another
client of our power APIs, so it's not appropriate for it to poke into
power implementation details. A sketch of the converted handler pattern
follows.
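
As an illustration only (condensed, with the runtime-PM bookkeeping of the
real handlers omitted), a converted sysfs handler reduces to calling the
wrapper and formatting its result; amdgpu_get_example() is a hypothetical
name:

	static ssize_t amdgpu_get_example(struct device *dev,
					  struct device_attribute *attr,
					  char *buf)
	{
		struct amdgpu_device *adev = drm_to_adev(dev_get_drvdata(dev));
		enum amd_pm_state_type pm;

		/* the wrapper falls back internally when unsupported */
		amdgpu_dpm_get_current_power_state(adev, &pm);

		return sysfs_emit(buf, "%s\n",
				  (pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
				  (pm == POWER_STATE_TYPE_BALANCED) ? "balanced" :
				  "performance");
	}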

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I397853ddb13eacfce841366de2a623535422df9a
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 458 ++++++++++++++++++-
 drivers/gpu/drm/amd/pm/amdgpu_pm.c        | 519 ++++++++--------------
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 160 +++----
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c |   3 -
 4 files changed, 709 insertions(+), 431 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 9b332c8a0079..3c59f16c7a6f 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -1453,7 +1453,9 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 	if (equal)
 		return;
 
-	amdgpu_dpm_set_power_state(adev);
+	if (adev->powerplay.pp_funcs->set_power_state)
+		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
+
 	amdgpu_dpm_post_set_power_state(adev);
 
 	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
@@ -1709,3 +1711,457 @@ int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
 
 	return smu_get_ecc_info(&adev->smu, umc_ecc);
 }
+
+struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
+						     uint32_t idx)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_vce_clock_state)
+		return NULL;
+
+	return pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle,
+					     idx);
+}
+
+void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
+					enum amd_pm_state_type *state)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_current_power_state) {
+		*state = adev->pm.dpm.user_state;
+		return;
+	}
+
+	*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
+	if (*state < POWER_STATE_TYPE_DEFAULT ||
+	    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
+		*state = adev->pm.dpm.user_state;
+
+	return;
+}
+
+void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
+				enum amd_pm_state_type state)
+{
+	adev->pm.dpm.user_state = state;
+
+	if (adev->powerplay.pp_funcs->dispatch_tasks)
+		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
+	else
+		amdgpu_pm_compute_clocks(adev);
+}
+
+enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	enum amd_dpm_forced_level level;
+
+	if (pp_funcs->get_performance_level)
+		level = pp_funcs->get_performance_level(adev->powerplay.pp_handle);
+	else
+		level = adev->pm.dpm.forced_level;
+
+	return level;
+}
+
+int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
+				       enum amd_dpm_forced_level level)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (pp_funcs->force_performance_level) {
+		if (adev->pm.dpm.thermal_active)
+			return -EINVAL;
+
+		if (pp_funcs->force_performance_level(adev->powerplay.pp_handle,
+						      level))
+			return -EINVAL;
+	}
+
+	adev->pm.dpm.forced_level = level;
+
+	return 0;
+}
+
+int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
+				 struct pp_states_info *states)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_pp_num_states)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
+}
+
+int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
+			      enum amd_pp_task task_id,
+			      enum amd_pm_state_type *user_state)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->dispatch_tasks)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->dispatch_tasks(adev->powerplay.pp_handle, task_id, user_state);
+}
+
+int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_pp_table)
+		return 0;
+
+	return pp_funcs->get_pp_table(adev->powerplay.pp_handle, table);
+}
+
+int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
+				      uint32_t type,
+				      long *input,
+				      uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fine_grain_clk_vol)
+		return 0;
+
+	return pp_funcs->set_fine_grain_clk_vol(adev->powerplay.pp_handle,
+						type,
+						input,
+						size);
+}
+
+int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
+				  uint32_t type,
+				  long *input,
+				  uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->odn_edit_dpm_table)
+		return 0;
+
+	return pp_funcs->odn_edit_dpm_table(adev->powerplay.pp_handle,
+					    type,
+					    input,
+					    size);
+}
+
+int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->print_clock_levels)
+		return 0;
+
+	return pp_funcs->print_clock_levels(adev->powerplay.pp_handle,
+					    type,
+					    buf);
+}
+
+int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
+				    uint64_t ppfeature_masks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_ppfeature_status)
+		return 0;
+
+	return pp_funcs->set_ppfeature_status(adev->powerplay.pp_handle,
+					      ppfeature_masks);
+}
+
+int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_ppfeature_status)
+		return 0;
+
+	return pp_funcs->get_ppfeature_status(adev->powerplay.pp_handle,
+					      buf);
+}
+
+int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
+				 enum pp_clock_type type,
+				 uint32_t mask)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->force_clock_level)
+		return 0;
+
+	return pp_funcs->force_clock_level(adev->powerplay.pp_handle,
+					   type,
+					   mask);
+}
+
+int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_sclk_od)
+		return 0;
+
+	return pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
+}
+
+int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_sclk_od)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
+
+	if (amdgpu_dpm_dispatch_task(adev,
+				     AMD_PP_TASK_READJUST_POWER_STATE,
+				     NULL) == -EOPNOTSUPP) {
+		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
+		amdgpu_pm_compute_clocks(adev);
+	}
+
+	return 0;
+}
+
+int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_mclk_od)
+		return 0;
+
+	return pp_funcs->get_mclk_od(adev->powerplay.pp_handle);
+}
+
+int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_mclk_od)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
+
+	if (amdgpu_dpm_dispatch_task(adev,
+				     AMD_PP_TASK_READJUST_POWER_STATE,
+				     NULL) == -EOPNOTSUPP) {
+		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
+		amdgpu_pm_compute_clocks(adev);
+	}
+
+	return 0;
+}
+
+int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
+				      char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_power_profile_mode)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_power_profile_mode(adev->powerplay.pp_handle,
+						buf);
+}
+
+int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
+				      long *input, uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_power_profile_mode)
+		return 0;
+
+	return pp_funcs->set_power_profile_mode(adev->powerplay.pp_handle,
+						input,
+						size);
+}
+
+int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_gpu_metrics)
+		return 0;
+
+	return pp_funcs->get_gpu_metrics(adev->powerplay.pp_handle, table);
+}
+
+int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t *fan_mode)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_control_mode)
+		return -EOPNOTSUPP;
+
+	*fan_mode = pp_funcs->get_fan_control_mode(adev->powerplay.pp_handle);
+
+	return 0;
+}
+
+int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_speed_pwm)
+		return -EINVAL;
+
+	return pp_funcs->set_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t *speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_speed_pwm)
+		return -EINVAL;
+
+	return pp_funcs->get_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t *speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_speed_rpm)
+		return -EINVAL;
+
+	return pp_funcs->get_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_speed_rpm)
+		return -EINVAL;
+
+	return pp_funcs->set_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t mode)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_control_mode)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_fan_control_mode(adev->powerplay.pp_handle, mode);
+
+	return 0;
+}
+
+int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
+			       uint32_t *limit,
+			       enum pp_power_limit_level pp_limit_level,
+			       enum pp_power_type power_type)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_power_limit)
+		return -ENODATA;
+
+	return pp_funcs->get_power_limit(adev->powerplay.pp_handle,
+					 limit,
+					 pp_limit_level,
+					 power_type);
+}
+
+int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
+			       uint32_t limit)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_power_limit)
+		return -EINVAL;
+
+	return pp_funcs->set_power_limit(adev->powerplay.pp_handle, limit);
+}
+
+int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev)
+{
+	if (!is_support_sw_smu(adev))
+		return false;
+
+	return is_support_cclk_dpm(adev);
+}
+
+int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
+						       struct seq_file *m)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->debugfs_print_current_performance_level)
+		return -EOPNOTSUPP;
+
+	pp_funcs->debugfs_print_current_performance_level(adev->powerplay.pp_handle,
+							  m);
+
+	return 0;
+}
+
+int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+				       void **addr,
+				       size_t *size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_smu_prv_buf_details)
+		return -ENOSYS;
+
+	return pp_funcs->get_smu_prv_buf_details(adev->powerplay.pp_handle,
+						 addr,
+						 size);
+}
+
+int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
+{
+	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+
+	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
+	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
+		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
+		return true;
+
+	return false;
+}
+
+int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+			    const char *buf,
+			    size_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_pp_table)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->set_pp_table(adev->powerplay.pp_handle,
+				      buf,
+				      size);
+}
+
+int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
+{
+	return adev->smu.cpu_core_num;
+}
+
+void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
+{
+	if (!is_support_sw_smu(adev))
+		return;
+
+	amdgpu_smu_stb_debug_fs_init(adev);
+}
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index 082539c70fd4..3382d30b5d90 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -34,7 +34,6 @@
 #include <linux/nospec.h>
 #include <linux/pm_runtime.h>
 #include <asm/processor.h>
-#include "hwmgr.h"
 
 static const struct cg_flag_name clocks[] = {
 	{AMD_CG_SUPPORT_GFX_FGCG, "Graphics Fine Grain Clock Gating"},
@@ -132,7 +131,6 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum amd_pm_state_type pm;
 	int ret;
 
@@ -147,11 +145,7 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_current_power_state) {
-		pm = amdgpu_dpm_get_current_power_state(adev);
-	} else {
-		pm = adev->pm.dpm.user_state;
-	}
+	amdgpu_dpm_get_current_power_state(adev, &pm);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -191,19 +185,8 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		mutex_lock(&adev->pm.mutex);
-		adev->pm.dpm.user_state = state;
-		mutex_unlock(&adev->pm.mutex);
-	} else if (adev->powerplay.pp_funcs->dispatch_tasks) {
-		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
-	} else {
-		mutex_lock(&adev->pm.mutex);
-		adev->pm.dpm.user_state = state;
-		mutex_unlock(&adev->pm.mutex);
+	amdgpu_dpm_set_power_state(adev, state);
 
-		amdgpu_pm_compute_clocks(adev);
-	}
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -290,10 +273,7 @@ static ssize_t amdgpu_get_power_dpm_force_performance_level(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_performance_level)
-		level = amdgpu_dpm_get_performance_level(adev);
-	else
-		level = adev->pm.dpm.forced_level;
+	level = amdgpu_dpm_get_performance_level(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -318,7 +298,6 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum amd_dpm_forced_level level;
 	enum amd_dpm_forced_level current_level;
 	int ret = 0;
@@ -358,11 +337,7 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_performance_level)
-		current_level = amdgpu_dpm_get_performance_level(adev);
-	else
-		current_level = adev->pm.dpm.forced_level;
-
+	current_level = amdgpu_dpm_get_performance_level(adev);
 	if (current_level == level) {
 		pm_runtime_mark_last_busy(ddev->dev);
 		pm_runtime_put_autosuspend(ddev->dev);
@@ -390,25 +365,12 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 		return -EINVAL;
 	}
 
-	if (pp_funcs->force_performance_level) {
-		mutex_lock(&adev->pm.mutex);
-		if (adev->pm.dpm.thermal_active) {
-			mutex_unlock(&adev->pm.mutex);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-		ret = amdgpu_dpm_force_performance_level(adev, level);
-		if (ret) {
-			mutex_unlock(&adev->pm.mutex);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		} else {
-			adev->pm.dpm.forced_level = level;
-		}
-		mutex_unlock(&adev->pm.mutex);
+	if (amdgpu_dpm_force_performance_level(adev, level)) {
+		pm_runtime_mark_last_busy(ddev->dev);
+		pm_runtime_put_autosuspend(ddev->dev);
+		return -EINVAL;
 	}
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -421,7 +383,6 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_states_info data;
 	uint32_t i;
 	int buf_len, ret;
@@ -437,11 +398,8 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_pp_num_states) {
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-	} else {
+	if (amdgpu_dpm_get_pp_num_states(adev, &data))
 		memset(&data, 0, sizeof(data));
-	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -463,7 +421,6 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_states_info data = {0};
 	enum amd_pm_state_type pm = 0;
 	int i = 0, ret = 0;
@@ -479,15 +436,16 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_current_power_state
-		 && pp_funcs->get_pp_num_states) {
-		pm = amdgpu_dpm_get_current_power_state(adev);
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-	}
+	amdgpu_dpm_get_current_power_state(adev, &pm);
+
+	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return ret;
+
 	for (i = 0; i < data.nums; i++) {
 		if (pm == data.states[i])
 			break;
@@ -525,6 +483,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
 	enum amd_pm_state_type state = 0;
+	struct pp_states_info data;
 	unsigned long idx;
 	int ret;
 
@@ -533,41 +492,49 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if (strlen(buf) == 1)
-		adev->pp_force_state_enabled = false;
-	else if (is_support_sw_smu(adev))
-		adev->pp_force_state_enabled = false;
-	else if (adev->powerplay.pp_funcs->dispatch_tasks &&
-			adev->powerplay.pp_funcs->get_pp_num_states) {
-		struct pp_states_info data;
-
-		ret = kstrtoul(buf, 0, &idx);
-		if (ret || idx >= ARRAY_SIZE(data.states))
-			return -EINVAL;
+	adev->pp_force_state_enabled = false;
 
-		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+	if (strlen(buf) == 1)
+		return count;
 
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-		state = data.states[idx];
+	ret = kstrtoul(buf, 0, &idx);
+	if (ret || idx >= ARRAY_SIZE(data.states))
+		return -EINVAL;
 
-		ret = pm_runtime_get_sync(ddev->dev);
-		if (ret < 0) {
-			pm_runtime_put_autosuspend(ddev->dev);
-			return ret;
-		}
+	idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
 
-		/* only set user selected power states */
-		if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
-		    state != POWER_STATE_TYPE_DEFAULT) {
-			amdgpu_dpm_dispatch_task(adev,
-					AMD_PP_TASK_ENABLE_USER_STATE, &state);
-			adev->pp_force_state_enabled = true;
-		}
-		pm_runtime_mark_last_busy(ddev->dev);
+	ret = pm_runtime_get_sync(ddev->dev);
+	if (ret < 0) {
 		pm_runtime_put_autosuspend(ddev->dev);
+		return ret;
+	}
+
+	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
+	if (ret)
+		goto err_out;
+
+	state = data.states[idx];
+
+	/* only set user selected power states */
+	if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
+	    state != POWER_STATE_TYPE_DEFAULT) {
+		ret = amdgpu_dpm_dispatch_task(adev,
+				AMD_PP_TASK_ENABLE_USER_STATE, &state);
+		if (ret)
+			goto err_out;
+
+		adev->pp_force_state_enabled = true;
 	}
 
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+
 	return count;
+
+err_out:
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+	return ret;
 }
 
 /**
@@ -601,17 +568,13 @@ static ssize_t amdgpu_get_pp_table(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_pp_table) {
-		size = amdgpu_dpm_get_pp_table(adev, &table);
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		if (size < 0)
-			return size;
-	} else {
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		return 0;
-	}
+	size = amdgpu_dpm_get_pp_table(adev, &table);
+
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+
+	if (size <= 0)
+		return size;
 
 	if (size >= PAGE_SIZE)
 		size = PAGE_SIZE - 1;
@@ -642,15 +605,13 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
 	}
 
 	ret = amdgpu_dpm_set_pp_table(adev, buf, count);
-	if (ret) {
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		return ret;
-	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return ret;
+
 	return count;
 }
 
@@ -866,46 +827,32 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_fine_grain_clk_vol) {
-		ret = amdgpu_dpm_set_fine_grain_clk_vol(adev, type,
-							parameter,
-							parameter_size);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	if (amdgpu_dpm_set_fine_grain_clk_vol(adev,
+					      type,
+					      parameter,
+					      parameter_size))
+		goto err_out;
 
-	if (adev->powerplay.pp_funcs->odn_edit_dpm_table) {
-		ret = amdgpu_dpm_odn_edit_dpm_table(adev, type,
-						    parameter, parameter_size);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	if (amdgpu_dpm_odn_edit_dpm_table(adev, type,
+					  parameter, parameter_size))
+		goto err_out;
 
 	if (type == PP_OD_COMMIT_DPM_TABLE) {
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev,
-						 AMD_PP_TASK_READJUST_POWER_STATE,
-						 NULL);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return count;
-		} else {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
+		if (amdgpu_dpm_dispatch_task(adev,
+					     AMD_PP_TASK_READJUST_POWER_STATE,
+					     NULL))
+			goto err_out;
 	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
 	return count;
+
+err_out:
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+	return -EINVAL;
 }
 
 static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
@@ -928,8 +875,8 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->print_clock_levels) {
-		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+	size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+	if (size > 0) {
 		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf+size);
 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf+size);
@@ -985,17 +932,14 @@ static ssize_t amdgpu_set_pp_features(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_ppfeature_status) {
-		ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -1019,9 +963,8 @@ static ssize_t amdgpu_get_pp_features(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_ppfeature_status)
-		size = amdgpu_dpm_get_ppfeature_status(adev, buf);
-	else
+	size = amdgpu_dpm_get_ppfeature_status(adev, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1080,9 +1023,8 @@ static ssize_t amdgpu_get_pp_dpm_clock(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->print_clock_levels)
-		size = amdgpu_dpm_print_clock_levels(adev, type, buf);
-	else
+	size = amdgpu_dpm_print_clock_levels(adev, type, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1151,10 +1093,7 @@ static ssize_t amdgpu_set_pp_dpm_clock(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->force_clock_level)
-		ret = amdgpu_dpm_force_clock_level(adev, type, mask);
-	else
-		ret = 0;
+	ret = amdgpu_dpm_force_clock_level(adev, type, mask);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1305,10 +1244,7 @@ static ssize_t amdgpu_get_pp_sclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev))
-		value = 0;
-	else if (adev->powerplay.pp_funcs->get_sclk_od)
-		value = amdgpu_dpm_get_sclk_od(adev);
+	value = amdgpu_dpm_get_sclk_od(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1342,19 +1278,7 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		value = 0;
-	} else {
-		if (adev->powerplay.pp_funcs->set_sclk_od)
-			amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
-
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
-		} else {
-			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-			amdgpu_pm_compute_clocks(adev);
-		}
-	}
+	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1382,10 +1306,7 @@ static ssize_t amdgpu_get_pp_mclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev))
-		value = 0;
-	else if (adev->powerplay.pp_funcs->get_mclk_od)
-		value = amdgpu_dpm_get_mclk_od(adev);
+	value = amdgpu_dpm_get_mclk_od(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1419,19 +1340,7 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		value = 0;
-	} else {
-		if (adev->powerplay.pp_funcs->set_mclk_od)
-			amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
-
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
-		} else {
-			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-			amdgpu_pm_compute_clocks(adev);
-		}
-	}
+	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1479,9 +1388,8 @@ static ssize_t amdgpu_get_pp_power_profile_mode(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_power_profile_mode)
-		size = amdgpu_dpm_get_power_profile_mode(adev, buf);
-	else
+	size = amdgpu_dpm_get_power_profile_mode(adev, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1545,8 +1453,7 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_power_profile_mode)
-		ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
+	ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1812,9 +1719,7 @@ static ssize_t amdgpu_get_gpu_metrics(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_gpu_metrics)
-		size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
-
+	size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
 	if (size <= 0)
 		goto out;
 
@@ -2053,7 +1958,6 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 {
 	struct device_attribute *dev_attr = &attr->dev_attr;
 	const char *attr_name = dev_attr->attr.name;
-	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
 	enum amd_asic_type asic_type = adev->asic_type;
 
 	if (!(attr->flags & mask)) {
@@ -2076,9 +1980,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 			*states = ATTR_STATE_UNSUPPORTED;
 	} else if (DEVICE_ATTR_IS(pp_od_clk_voltage)) {
 		*states = ATTR_STATE_UNSUPPORTED;
-		if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
-		    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
-			(!is_support_sw_smu(adev) && hwmgr->od_enabled))
+		if (amdgpu_dpm_is_overdrive_supported(adev))
 			*states = ATTR_STATE_SUPPORTED;
 	} else if (DEVICE_ATTR_IS(mem_busy_percent)) {
 		if (adev->flags & AMD_IS_APU || asic_type == CHIP_VEGA10)
@@ -2105,8 +2007,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 		if (!(asic_type == CHIP_VANGOGH || asic_type == CHIP_SIENNA_CICHLID))
 			*states = ATTR_STATE_UNSUPPORTED;
 	} else if (DEVICE_ATTR_IS(pp_power_profile_mode)) {
-		if (!adev->powerplay.pp_funcs->get_power_profile_mode ||
-		    amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
+		if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
 			*states = ATTR_STATE_UNSUPPORTED;
 	}
 
@@ -2389,17 +2290,14 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return sysfs_emit(buf, "%u\n", pwm_mode);
 }
 
@@ -2427,17 +2325,14 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	amdgpu_dpm_set_fan_control_mode(adev, value);
+	ret = amdgpu_dpm_set_fan_control_mode(adev, value);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -2469,32 +2364,29 @@ static ssize_t amdgpu_hwmon_set_pwm1(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
+	err = kstrtou32(buf, 10, &value);
+	if (err)
+		return err;
+
 	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (err < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return err;
 	}
 
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
+	if (err)
+		goto out;
+
 	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
 		pr_info("manual fan speed control should be enabled first\n");
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
+		err = -EINVAL;
+		goto out;
 	}
 
-	err = kstrtou32(buf, 10, &value);
-	if (err) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return err;
-	}
-
-	if (adev->powerplay.pp_funcs->set_fan_speed_pwm)
-		err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
 
+out:
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
@@ -2523,10 +2415,7 @@ static ssize_t amdgpu_hwmon_get_pwm1(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_pwm)
-		err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2556,10 +2445,7 @@ static ssize_t amdgpu_hwmon_get_fan1_input(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
-		err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2653,10 +2539,7 @@ static ssize_t amdgpu_hwmon_get_fan1_target(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
-		err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2681,32 +2564,28 @@ static ssize_t amdgpu_hwmon_set_fan1_target(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
+	err = kstrtou32(buf, 10, &value);
+	if (err)
+		return err;
+
 	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (err < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return err;
 	}
 
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
+	if (err)
+		goto out;
 
 	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -ENODATA;
-	}
-
-	err = kstrtou32(buf, 10, &value);
-	if (err) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return err;
+		err = -ENODATA;
+		goto out;
 	}
 
-	if (adev->powerplay.pp_funcs->set_fan_speed_rpm)
-		err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
 
+out:
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
@@ -2735,17 +2614,14 @@ static ssize_t amdgpu_hwmon_get_fan1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return sysfs_emit(buf, "%i\n", pwm_mode == AMD_FAN_CTRL_AUTO ? 0 : 1);
 }
 
@@ -2781,16 +2657,14 @@ static ssize_t amdgpu_hwmon_set_fan1_enable(struct device *dev,
 		return err;
 	}
 
-	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-	amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
+	err = amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (err)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -2926,7 +2800,6 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
 					enum pp_power_limit_level pp_limit_level)
 {
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
 	uint32_t limit;
 	ssize_t size;
@@ -2937,16 +2810,13 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if ( !(pp_funcs && pp_funcs->get_power_limit))
-		return -ENODATA;
-
 	r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (r < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return r;
 	}
 
-	r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
+	r = amdgpu_dpm_get_power_limit(adev, &limit,
 				      pp_limit_level, power_type);
 
 	if (!r)
@@ -3001,7 +2871,6 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
 		size_t count)
 {
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	int limit_type = to_sensor_dev_attr(attr)->index;
 	int err;
 	u32 value;
@@ -3027,10 +2896,7 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
 		return err;
 	}
 
-	if (pp_funcs && pp_funcs->set_power_limit)
-		err = pp_funcs->set_power_limit(adev->powerplay.pp_handle, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_power_limit(adev, value);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -3303,6 +3169,7 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	struct device *dev = kobj_to_dev(kobj);
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
 	umode_t effective_mode = attr->mode;
+	uint32_t speed = 0;
 
 	/* under multi-vf mode, the hwmon attributes are all not supported */
 	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
@@ -3367,20 +3234,18 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	     attr == &sensor_dev_attr_fan1_enable.dev_attr.attr))
 		return 0;
 
-	if (!is_support_sw_smu(adev)) {
-		/* mask fan attributes if we have no bindings for this asic to expose */
-		if ((!adev->powerplay.pp_funcs->get_fan_speed_pwm &&
-		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
-		    (!adev->powerplay.pp_funcs->get_fan_control_mode &&
-		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
-			effective_mode &= ~S_IRUGO;
+	/* mask fan attributes if we have no bindings for this asic to expose */
+	if (((amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
+	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
+	    ((amdgpu_dpm_get_fan_control_mode(adev, &speed) == -EOPNOTSUPP) &&
+	     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
+		effective_mode &= ~S_IRUGO;
 
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
-		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
-		    (!adev->powerplay.pp_funcs->set_fan_control_mode &&
-		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
-			effective_mode &= ~S_IWUSR;
-	}
+	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
+	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
+	      ((amdgpu_dpm_set_fan_control_mode(adev, speed) == -EOPNOTSUPP) &&
+	      attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
+		effective_mode &= ~S_IWUSR;
 
 	if (((adev->family == AMDGPU_FAMILY_SI) ||
 		 ((adev->flags & AMD_IS_APU) &&
@@ -3397,22 +3262,20 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	    (attr == &sensor_dev_attr_power1_average.dev_attr.attr))
 		return 0;
 
-	if (!is_support_sw_smu(adev)) {
-		/* hide max/min values if we can't both query and manage the fan */
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_pwm) &&
-		     (!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
-		    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
-		     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
-			return 0;
+	/* hide max/min values if we can't both query and manage the fan */
+	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
+	      (amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
+	      (amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
+	      (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL)) &&
+	    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
+	     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
+		return 0;
 
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
-		    (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
-		     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
-			return 0;
-	}
+	if ((amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
+	     (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL) &&
+	     (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
+	     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
+		return 0;
 
 	if ((adev->family == AMDGPU_FAMILY_SI ||	/* not implemented yet */
 	     adev->family == AMDGPU_FAMILY_KV) &&	/* not implemented yet */
@@ -3542,14 +3405,15 @@ static void amdgpu_debugfs_prints_cpu_info(struct seq_file *m,
 	uint16_t *p_val;
 	uint32_t size;
 	int i;
+	uint32_t num_cpu_cores = amdgpu_dpm_get_num_cpu_cores(adev);
 
-	if (is_support_cclk_dpm(adev)) {
-		p_val = kcalloc(adev->smu.cpu_core_num, sizeof(uint16_t),
+	if (amdgpu_dpm_is_cclk_dpm_supported(adev)) {
+		p_val = kcalloc(num_cpu_cores, sizeof(uint16_t),
 				GFP_KERNEL);
 
 		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_CPU_CLK,
 					    (void *)p_val, &size)) {
-			for (i = 0; i < adev->smu.cpu_core_num; i++)
+			for (i = 0; i < num_cpu_cores; i++)
 				seq_printf(m, "\t%u MHz (CPU%d)\n",
 					   *(p_val + i), i);
 		}
@@ -3677,27 +3541,11 @@ static int amdgpu_debugfs_pm_info_show(struct seq_file *m, void *unused)
 		return r;
 	}
 
-	if (!adev->pm.dpm_enabled) {
-		seq_printf(m, "dpm not enabled\n");
-		pm_runtime_mark_last_busy(dev->dev);
-		pm_runtime_put_autosuspend(dev->dev);
-		return 0;
-	}
-
-	if (!is_support_sw_smu(adev) &&
-	    adev->powerplay.pp_funcs->debugfs_print_current_performance_level) {
-		mutex_lock(&adev->pm.mutex);
-		if (adev->powerplay.pp_funcs->debugfs_print_current_performance_level)
-			adev->powerplay.pp_funcs->debugfs_print_current_performance_level(adev, m);
-		else
-			seq_printf(m, "Debugfs support not implemented for this asic\n");
-		mutex_unlock(&adev->pm.mutex);
-		r = 0;
-	} else {
+	if (amdgpu_dpm_debugfs_print_current_performance_level(adev, m)) {
 		r = amdgpu_debugfs_pm_info_pp(m, adev);
+		if (r)
+			goto out;
 	}
-	if (r)
-		goto out;
 
 	amdgpu_device_ip_get_clockgating_state(adev, &flags);
 
@@ -3723,21 +3571,18 @@ static ssize_t amdgpu_pm_prv_buffer_read(struct file *f, char __user *buf,
 					 size_t size, loff_t *pos)
 {
 	struct amdgpu_device *adev = file_inode(f)->i_private;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-	void *pp_handle = adev->powerplay.pp_handle;
 	size_t smu_prv_buf_size;
 	void *smu_prv_buf;
+	int ret = 0;
 
 	if (amdgpu_in_reset(adev))
 		return -EPERM;
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if (pp_funcs && pp_funcs->get_smu_prv_buf_details)
-		pp_funcs->get_smu_prv_buf_details(pp_handle, &smu_prv_buf,
-						  &smu_prv_buf_size);
-	else
-		return -ENOSYS;
+	ret = amdgpu_dpm_get_smu_prv_buf_details(adev, &smu_prv_buf, &smu_prv_buf_size);
+	if (ret)
+		return ret;
 
 	if (!smu_prv_buf || !smu_prv_buf_size)
 		return -EINVAL;
@@ -3770,6 +3615,6 @@ void amdgpu_debugfs_pm_init(struct amdgpu_device *adev)
 					 &amdgpu_debugfs_pm_prv_buffer_fops,
 					 adev->pm.smu_prv_buffer_size);
 
-	amdgpu_smu_stb_debug_fs_init(adev);
+	amdgpu_dpm_stb_debug_fs_init(adev);
 #endif
 }
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 7289d379a9fb..039c40b1d0cb 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -262,9 +262,6 @@ enum amdgpu_pcie_gen {
 #define amdgpu_dpm_pre_set_power_state(adev) \
 		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
 
-#define amdgpu_dpm_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->set_power_state((adev)->powerplay.pp_handle))
-
 #define amdgpu_dpm_post_set_power_state(adev) \
 		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
 
@@ -280,100 +277,13 @@ enum amdgpu_pcie_gen {
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
 
-#define amdgpu_dpm_set_fan_control_mode(adev, m) \
-		((adev)->powerplay.pp_funcs->set_fan_control_mode((adev)->powerplay.pp_handle, (m)))
-
-#define amdgpu_dpm_get_fan_control_mode(adev) \
-		((adev)->powerplay.pp_funcs->get_fan_control_mode((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_fan_speed_pwm(adev, s) \
-		((adev)->powerplay.pp_funcs->set_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
-
-#define amdgpu_dpm_get_fan_speed_pwm(adev, s) \
-		((adev)->powerplay.pp_funcs->get_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
-
-#define amdgpu_dpm_get_fan_speed_rpm(adev, s) \
-		((adev)->powerplay.pp_funcs->get_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
-
-#define amdgpu_dpm_set_fan_speed_rpm(adev, s) \
-		((adev)->powerplay.pp_funcs->set_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
-
-#define amdgpu_dpm_force_performance_level(adev, l) \
-		((adev)->powerplay.pp_funcs->force_performance_level((adev)->powerplay.pp_handle, (l)))
-
-#define amdgpu_dpm_get_current_power_state(adev) \
-		((adev)->powerplay.pp_funcs->get_current_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_get_pp_num_states(adev, data) \
-		((adev)->powerplay.pp_funcs->get_pp_num_states((adev)->powerplay.pp_handle, data))
-
-#define amdgpu_dpm_get_pp_table(adev, table) \
-		((adev)->powerplay.pp_funcs->get_pp_table((adev)->powerplay.pp_handle, table))
-
-#define amdgpu_dpm_set_pp_table(adev, buf, size) \
-		((adev)->powerplay.pp_funcs->set_pp_table((adev)->powerplay.pp_handle, buf, size))
-
-#define amdgpu_dpm_print_clock_levels(adev, type, buf) \
-		((adev)->powerplay.pp_funcs->print_clock_levels((adev)->powerplay.pp_handle, type, buf))
-
-#define amdgpu_dpm_force_clock_level(adev, type, level) \
-		((adev)->powerplay.pp_funcs->force_clock_level((adev)->powerplay.pp_handle, type, level))
-
-#define amdgpu_dpm_get_sclk_od(adev) \
-		((adev)->powerplay.pp_funcs->get_sclk_od((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_sclk_od(adev, value) \
-		((adev)->powerplay.pp_funcs->set_sclk_od((adev)->powerplay.pp_handle, value))
-
-#define amdgpu_dpm_get_mclk_od(adev) \
-		((adev)->powerplay.pp_funcs->get_mclk_od((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_mclk_od(adev, value) \
-		((adev)->powerplay.pp_funcs->set_mclk_od((adev)->powerplay.pp_handle, value))
-
-#define amdgpu_dpm_dispatch_task(adev, task_id, user_state)		\
-		((adev)->powerplay.pp_funcs->dispatch_tasks)((adev)->powerplay.pp_handle, (task_id), (user_state))
-
 #define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
 		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
 
-#define amdgpu_dpm_get_vce_clock_state(adev, i)				\
-		((adev)->powerplay.pp_funcs->get_vce_clock_state((adev)->powerplay.pp_handle, (i)))
-
-#define amdgpu_dpm_get_performance_level(adev)				\
-		((adev)->powerplay.pp_funcs->get_performance_level((adev)->powerplay.pp_handle))
-
 #define amdgpu_dpm_reset_power_profile_state(adev, request) \
 		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
 			(adev)->powerplay.pp_handle, request))
 
-#define amdgpu_dpm_get_power_profile_mode(adev, buf) \
-		((adev)->powerplay.pp_funcs->get_power_profile_mode(\
-			(adev)->powerplay.pp_handle, buf))
-
-#define amdgpu_dpm_set_power_profile_mode(adev, parameter, size) \
-		((adev)->powerplay.pp_funcs->set_power_profile_mode(\
-			(adev)->powerplay.pp_handle, parameter, size))
-
-#define amdgpu_dpm_set_fine_grain_clk_vol(adev, type, parameter, size) \
-		((adev)->powerplay.pp_funcs->set_fine_grain_clk_vol(\
-			(adev)->powerplay.pp_handle, type, parameter, size))
-
-#define amdgpu_dpm_odn_edit_dpm_table(adev, type, parameter, size) \
-		((adev)->powerplay.pp_funcs->odn_edit_dpm_table(\
-			(adev)->powerplay.pp_handle, type, parameter, size))
-
-#define amdgpu_dpm_get_ppfeature_status(adev, buf) \
-		((adev)->powerplay.pp_funcs->get_ppfeature_status(\
-			(adev)->powerplay.pp_handle, (buf)))
-
-#define amdgpu_dpm_set_ppfeature_status(adev, ppfeatures) \
-		((adev)->powerplay.pp_funcs->set_ppfeature_status(\
-			(adev)->powerplay.pp_handle, (ppfeatures)))
-
-#define amdgpu_dpm_get_gpu_metrics(adev, table) \
-		((adev)->powerplay.pp_funcs->get_gpu_metrics((adev)->powerplay.pp_handle, table))
-
 struct amdgpu_dpm {
 	struct amdgpu_ps        *ps;
 	/* number of valid power states */
@@ -598,4 +508,74 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
 				 enum gfx_change_state state);
 int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
 			    void *umc_ecc);
+struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
+						     uint32_t idx);
+void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev, enum amd_pm_state_type *state);
+void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
+				enum amd_pm_state_type state);
+enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev);
+int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
+				       enum amd_dpm_forced_level level);
+int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
+				 struct pp_states_info *states);
+int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
+			      enum amd_pp_task task_id,
+			      enum amd_pm_state_type *user_state);
+int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table);
+int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
+				      uint32_t type,
+				      long *input,
+				      uint32_t size);
+int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
+				  uint32_t type,
+				  long *input,
+				  uint32_t size);
+int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf);
+int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
+				    uint64_t ppfeature_masks);
+int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf);
+int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
+				 enum pp_clock_type type,
+				 uint32_t mask);
+int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev);
+int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value);
+int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev);
+int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value);
+int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
+				      char *buf);
+int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
+				      long *input, uint32_t size);
+int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table);
+int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t *fan_mode);
+int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t speed);
+int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t *speed);
+int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t *speed);
+int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t speed);
+int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t mode);
+int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
+			       uint32_t *limit,
+			       enum pp_power_limit_level pp_limit_level,
+			       enum pp_power_type power_type);
+int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
+			       uint32_t limit);
+int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev);
+int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
+						       struct seq_file *m);
+int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+				       void **addr,
+				       size_t *size);
+int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev);
+int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+			    const char *buf,
+			    size_t size);
+int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev);
+void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev);
 #endif
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index ef7d0e377965..eaed5aba7547 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -470,9 +470,6 @@ bool is_support_cclk_dpm(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = &adev->smu;
 
-	if (!is_support_sw_smu(adev))
-		return false;
-
 	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
 		return false;
 
-- 
2.29.0



* [PATCH V2 03/17] drm/amd/pm: do not expose power implementation details to display
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
  2021-11-30  7:42 ` [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power Evan Quan
  2021-11-30  7:42 ` [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 04/17] drm/amd/pm: do not expose those APIs used internally only in amdgpu_dpm.c Evan Quan
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Display is another client of our power APIs. It is not proper for it to
reach into power implementation details there.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Ic897131e16473ed29d3d7586d822a55c64e6574a
---
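A minimal sketch of the wrapper shape this patch repeats for each display
interface (amdgpu_dpm_foo and ->foo are placeholder names, not a real
callback): the NULL check on the underlying pp_funcs callback stays inside
power, and an unimplemented callback surfaces to the caller as -EOPNOTSUPP.

	int amdgpu_dpm_foo(struct amdgpu_device *adev, uint32_t arg)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		/* callback not implemented by the power framework in use */
		if (!pp_funcs->foo)
			return -EOPNOTSUPP;

		return pp_funcs->foo(adev->powerplay.pp_handle, arg);
	}

Callers such as the dm_pp_*/pp_nv_* helpers then map -EOPNOTSUPP to
PP_SMU_RESULT_UNSUPPORTED and any other non-zero return to
PP_SMU_RESULT_FAIL.
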
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   6 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  | 246 +++++++-----------
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 218 ++++++++++++++++
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |  38 +++
 4 files changed, 344 insertions(+), 164 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 53f7fdf956eb..92480cc57623 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2139,12 +2139,8 @@ static void s3_handle_mst(struct drm_device *dev, bool suspend)
 
 static int amdgpu_dm_smu_write_watermarks_table(struct amdgpu_device *adev)
 {
-	struct smu_context *smu = &adev->smu;
 	int ret = 0;
 
-	if (!is_support_sw_smu(adev))
-		return 0;
-
 	/* This interface is for dGPU Navi1x.Linux dc-pplib interface depends
 	 * on window driver dc implementation.
 	 * For Navi1x, clock settings of dcn watermarks are fixed. the settings
@@ -2183,7 +2179,7 @@ static int amdgpu_dm_smu_write_watermarks_table(struct amdgpu_device *adev)
 		return 0;
 	}
 
-	ret = smu_write_watermarks_table(smu);
+	ret = amdgpu_dpm_write_watermarks_table(adev);
 	if (ret) {
 		DRM_ERROR("Failed to update WMTABLE!\n");
 		return ret;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
index eba270121698..46550811da00 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
@@ -99,10 +99,7 @@ bool dm_pp_apply_display_requirements(
 			adev->pm.pm_display_cfg.displays[i].controller_id = dc_cfg->pipe_idx + 1;
 		}
 
-		if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->display_configuration_change)
-			adev->powerplay.pp_funcs->display_configuration_change(
-				adev->powerplay.pp_handle,
-				&adev->pm.pm_display_cfg);
+		amdgpu_dpm_display_configuration_change(adev, &adev->pm.pm_display_cfg);
 
 		amdgpu_pm_compute_clocks(adev);
 	}
@@ -298,31 +295,25 @@ bool dm_pp_get_clock_levels_by_type(
 		struct dm_pp_clock_levels *dc_clks)
 {
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
 	struct amd_pp_clocks pp_clks = { 0 };
 	struct amd_pp_simple_clock_info validation_clks = { 0 };
 	uint32_t i;
 
-	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->get_clock_by_type) {
-		if (adev->powerplay.pp_funcs->get_clock_by_type(pp_handle,
-			dc_to_pp_clock_type(clk_type), &pp_clks)) {
-			/* Error in pplib. Provide default values. */
-			get_default_clock_levels(clk_type, dc_clks);
-			return true;
-		}
+	if (amdgpu_dpm_get_clock_by_type(adev,
+		dc_to_pp_clock_type(clk_type), &pp_clks)) {
+		/* Error in pplib. Provide default values. */
+		get_default_clock_levels(clk_type, dc_clks);
+		return true;
 	}
 
 	pp_to_dc_clock_levels(&pp_clks, dc_clks, clk_type);
 
-	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->get_display_mode_validation_clocks) {
-		if (adev->powerplay.pp_funcs->get_display_mode_validation_clocks(
-						pp_handle, &validation_clks)) {
-			/* Error in pplib. Provide default values. */
-			DRM_INFO("DM_PPLIB: Warning: using default validation clocks!\n");
-			validation_clks.engine_max_clock = 72000;
-			validation_clks.memory_max_clock = 80000;
-			validation_clks.level = 0;
-		}
+	if (amdgpu_dpm_get_display_mode_validation_clks(adev, &validation_clks)) {
+		/* Error in pplib. Provide default values. */
+		DRM_INFO("DM_PPLIB: Warning: using default validation clocks!\n");
+		validation_clks.engine_max_clock = 72000;
+		validation_clks.memory_max_clock = 80000;
+		validation_clks.level = 0;
 	}
 
 	DRM_INFO("DM_PPLIB: Validation clocks:\n");
@@ -370,18 +361,14 @@ bool dm_pp_get_clock_levels_by_type_with_latency(
 	struct dm_pp_clock_levels_with_latency *clk_level_info)
 {
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
 	struct pp_clock_levels_with_latency pp_clks = { 0 };
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	int ret;
 
-	if (pp_funcs && pp_funcs->get_clock_by_type_with_latency) {
-		ret = pp_funcs->get_clock_by_type_with_latency(pp_handle,
-						dc_to_pp_clock_type(clk_type),
-						&pp_clks);
-		if (ret)
-			return false;
-	}
+	ret = amdgpu_dpm_get_clock_by_type_with_latency(adev,
+					dc_to_pp_clock_type(clk_type),
+					&pp_clks);
+	if (ret)
+		return false;
 
 	pp_to_dc_clock_levels_with_latency(&pp_clks, clk_level_info, clk_type);
 
@@ -394,18 +381,14 @@ bool dm_pp_get_clock_levels_by_type_with_voltage(
 	struct dm_pp_clock_levels_with_voltage *clk_level_info)
 {
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
 	struct pp_clock_levels_with_voltage pp_clk_info = {0};
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	int ret;
 
-	if (pp_funcs && pp_funcs->get_clock_by_type_with_voltage) {
-		ret = pp_funcs->get_clock_by_type_with_voltage(pp_handle,
-						dc_to_pp_clock_type(clk_type),
-						&pp_clk_info);
-		if (ret)
-			return false;
-	}
+	ret = amdgpu_dpm_get_clock_by_type_with_voltage(adev,
+					dc_to_pp_clock_type(clk_type),
+					&pp_clk_info);
+	if (ret)
+		return false;
 
 	pp_to_dc_clock_levels_with_voltage(&pp_clk_info, clk_level_info, clk_type);
 
@@ -417,19 +400,16 @@ bool dm_pp_notify_wm_clock_changes(
 	struct dm_pp_wm_sets_with_clock_ranges *wm_with_clock_ranges)
 {
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
 	/*
 	 * Limit this watermark setting for Polaris for now
 	 * TODO: expand this to other ASICs
 	 */
-	if ((adev->asic_type >= CHIP_POLARIS10) && (adev->asic_type <= CHIP_VEGAM)
-	     && pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges) {
-		if (!pp_funcs->set_watermarks_for_clocks_ranges(pp_handle,
+	if ((adev->asic_type >= CHIP_POLARIS10) &&
+	     (adev->asic_type <= CHIP_VEGAM) &&
+	     !amdgpu_dpm_set_watermarks_for_clocks_ranges(adev,
 						(void *)wm_with_clock_ranges))
 			return true;
-	}
 
 	return false;
 }
@@ -456,12 +436,10 @@ bool dm_pp_apply_clock_for_voltage_request(
 	if (!pp_clock_request.clock_type)
 		return false;
 
-	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->display_clock_voltage_request)
-		ret = adev->powerplay.pp_funcs->display_clock_voltage_request(
-			adev->powerplay.pp_handle,
-			&pp_clock_request);
-	if (ret)
+	ret = amdgpu_dpm_display_clock_voltage_request(adev, &pp_clock_request);
+	if (ret && (ret != -EOPNOTSUPP))
 		return false;
+
 	return true;
 }
 
@@ -471,15 +449,8 @@ bool dm_pp_get_static_clocks(
 {
 	struct amdgpu_device *adev = ctx->driver_context;
 	struct amd_pp_clock_info pp_clk_info = {0};
-	int ret = 0;
 
-	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->get_current_clocks)
-		ret = adev->powerplay.pp_funcs->get_current_clocks(
-			adev->powerplay.pp_handle,
-			&pp_clk_info);
-	else
-		return false;
-	if (ret)
+	if (amdgpu_dpm_get_current_clocks(adev, &pp_clk_info))
 		return false;
 
 	static_clk_info->max_clocks_state = pp_to_dc_powerlevel_state(pp_clk_info.max_clocks_state);
@@ -494,8 +465,6 @@ static void pp_rv_set_wm_ranges(struct pp_smu *pp,
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct dm_pp_wm_sets_with_clock_ranges_soc15 wm_with_clock_ranges;
 	struct dm_pp_clock_range_for_dmif_wm_set_soc15 *wm_dce_clocks = wm_with_clock_ranges.wm_dmif_clocks_ranges;
 	struct dm_pp_clock_range_for_mcif_wm_set_soc15 *wm_soc_clocks = wm_with_clock_ranges.wm_mcif_clocks_ranges;
@@ -536,72 +505,48 @@ static void pp_rv_set_wm_ranges(struct pp_smu *pp,
 				ranges->writer_wm_sets[i].min_drain_clk_mhz * 1000;
 	}
 
-	if (pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges)
-		pp_funcs->set_watermarks_for_clocks_ranges(pp_handle,
-							   &wm_with_clock_ranges);
+	amdgpu_dpm_set_watermarks_for_clocks_ranges(adev,
+						    &wm_with_clock_ranges);
 }
 
 static void pp_rv_set_pme_wa_enable(struct pp_smu *pp)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (pp_funcs && pp_funcs->notify_smu_enable_pwe)
-		pp_funcs->notify_smu_enable_pwe(pp_handle);
+	amdgpu_dpm_notify_smu_enable_pwe(adev);
 }
 
 static void pp_rv_set_active_display_count(struct pp_smu *pp, int count)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (!pp_funcs || !pp_funcs->set_active_display_count)
-		return;
-
-	pp_funcs->set_active_display_count(pp_handle, count);
+	amdgpu_dpm_set_active_display_count(adev, count);
 }
 
 static void pp_rv_set_min_deep_sleep_dcfclk(struct pp_smu *pp, int clock)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-
-	if (!pp_funcs || !pp_funcs->set_min_deep_sleep_dcefclk)
-		return;
 
-	pp_funcs->set_min_deep_sleep_dcefclk(pp_handle, clock);
+	amdgpu_dpm_set_min_deep_sleep_dcefclk(adev, clock);
 }
 
 static void pp_rv_set_hard_min_dcefclk_by_freq(struct pp_smu *pp, int clock)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (!pp_funcs || !pp_funcs->set_hard_min_dcefclk_by_freq)
-		return;
-
-	pp_funcs->set_hard_min_dcefclk_by_freq(pp_handle, clock);
+	amdgpu_dpm_set_hard_min_dcefclk_by_freq(adev, clock);
 }
 
 static void pp_rv_set_hard_min_fclk_by_freq(struct pp_smu *pp, int mhz)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-
-	if (!pp_funcs || !pp_funcs->set_hard_min_fclk_by_freq)
-		return;
 
-	pp_funcs->set_hard_min_fclk_by_freq(pp_handle, mhz);
+	amdgpu_dpm_set_hard_min_fclk_by_freq(adev, mhz);
 }
 
 static enum pp_smu_status pp_nv_set_wm_ranges(struct pp_smu *pp,
@@ -609,11 +554,8 @@ static enum pp_smu_status pp_nv_set_wm_ranges(struct pp_smu *pp,
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges)
-		pp_funcs->set_watermarks_for_clocks_ranges(pp_handle, ranges);
+	amdgpu_dpm_set_watermarks_for_clocks_ranges(adev, ranges);
 
 	return PP_SMU_RESULT_OK;
 }
@@ -622,14 +564,13 @@ static enum pp_smu_status pp_nv_set_display_count(struct pp_smu *pp, int count)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
-	if (!pp_funcs || !pp_funcs->set_active_display_count)
+	ret = amdgpu_dpm_set_active_display_count(adev, count);
+	if (ret == -EOPNOTSUPP)
 		return PP_SMU_RESULT_UNSUPPORTED;
-
-	/* 0: successful or smu.ppt_funcs->set_display_count = NULL;  1: fail */
-	if (pp_funcs->set_active_display_count(pp_handle, count))
+	else if (ret)
+		/* any other non-zero return means the request failed */
 		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
@@ -640,14 +581,13 @@ pp_nv_set_min_deep_sleep_dcfclk(struct pp_smu *pp, int mhz)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-
-	if (!pp_funcs || !pp_funcs->set_min_deep_sleep_dcefclk)
-		return PP_SMU_RESULT_UNSUPPORTED;
+	int ret = 0;
 
 	/* 0: successful or smu.ppt_funcs->set_deep_sleep_dcefclk = NULL;1: fail */
-	if (pp_funcs->set_min_deep_sleep_dcefclk(pp_handle, mhz))
+	ret = amdgpu_dpm_set_min_deep_sleep_dcefclk(adev, mhz);
+	if (ret == -EOPNOTSUPP)
+		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
 		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
@@ -658,12 +598,8 @@ static enum pp_smu_status pp_nv_set_hard_min_dcefclk_by_freq(
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_display_clock_request clock_req;
-
-	if (!pp_funcs || !pp_funcs->display_clock_voltage_request)
-		return PP_SMU_RESULT_UNSUPPORTED;
+	int ret = 0;
 
 	clock_req.clock_type = amd_pp_dcef_clock;
 	clock_req.clock_freq_in_khz = mhz * 1000;
@@ -671,7 +607,10 @@ static enum pp_smu_status pp_nv_set_hard_min_dcefclk_by_freq(
 	/* 0: successful or smu.ppt_funcs->display_clock_voltage_request = NULL
 	 * 1: fail
 	 */
-	if (pp_funcs->display_clock_voltage_request(pp_handle, &clock_req))
+	ret = amdgpu_dpm_display_clock_voltage_request(adev, &clock_req);
+	if (ret == -EOPNOTSUPP)
+		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
 		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
@@ -682,12 +621,8 @@ pp_nv_set_hard_min_uclk_by_freq(struct pp_smu *pp, int mhz)
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_display_clock_request clock_req;
-
-	if (!pp_funcs || !pp_funcs->display_clock_voltage_request)
-		return PP_SMU_RESULT_UNSUPPORTED;
+	int ret = 0;
 
 	clock_req.clock_type = amd_pp_mem_clock;
 	clock_req.clock_freq_in_khz = mhz * 1000;
@@ -695,7 +630,10 @@ pp_nv_set_hard_min_uclk_by_freq(struct pp_smu *pp, int mhz)
 	/* 0: successful or smu.ppt_funcs->display_clock_voltage_request = NULL
 	 * 1: fail
 	 */
-	if (pp_funcs->display_clock_voltage_request(pp_handle, &clock_req))
+	ret = amdgpu_dpm_display_clock_voltage_request(adev, &clock_req);
+	if (ret == -EOPNOTSUPP)
+		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
 		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
@@ -706,14 +644,10 @@ static enum pp_smu_status pp_nv_set_pstate_handshake_support(
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (pp_funcs && pp_funcs->display_disable_memory_clock_switch) {
-		if (pp_funcs->display_disable_memory_clock_switch(pp_handle,
-								  !pstate_handshake_supported))
-			return PP_SMU_RESULT_FAIL;
-	}
+	if (amdgpu_dpm_display_disable_memory_clock_switch(adev,
+							  !pstate_handshake_supported))
+		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
 }
@@ -723,12 +657,8 @@ static enum pp_smu_status pp_nv_set_voltage_by_freq(struct pp_smu *pp,
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_display_clock_request clock_req;
-
-	if (!pp_funcs || !pp_funcs->display_clock_voltage_request)
-		return PP_SMU_RESULT_UNSUPPORTED;
+	int ret = 0;
 
 	switch (clock_id) {
 	case PP_SMU_NV_DISPCLK:
@@ -748,7 +678,10 @@ static enum pp_smu_status pp_nv_set_voltage_by_freq(struct pp_smu *pp,
 	/* 0: successful or smu.ppt_funcs->display_clock_voltage_request = NULL
 	 * 1: fail
 	 */
-	if (pp_funcs->display_clock_voltage_request(pp_handle, &clock_req))
+	ret = amdgpu_dpm_display_clock_voltage_request(adev, &clock_req);
+	if (ret == -EOPNOTSUPP)
+		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
 		return PP_SMU_RESULT_FAIL;
 
 	return PP_SMU_RESULT_OK;
@@ -759,16 +692,16 @@ static enum pp_smu_status pp_nv_get_maximum_sustainable_clocks(
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
-	if (!pp_funcs || !pp_funcs->get_max_sustainable_clocks_by_dc)
+	ret = amdgpu_dpm_get_max_sustainable_clocks_by_dc(adev,
+							  max_clocks);
+	if (ret == -EOPNOTSUPP)
 		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
+		return PP_SMU_RESULT_FAIL;
 
-	if (!pp_funcs->get_max_sustainable_clocks_by_dc(pp_handle, max_clocks))
-		return PP_SMU_RESULT_OK;
-
-	return PP_SMU_RESULT_FAIL;
+	return PP_SMU_RESULT_OK;
 }
 
 static enum pp_smu_status pp_nv_get_uclk_dpm_states(struct pp_smu *pp,
@@ -776,18 +709,17 @@ static enum pp_smu_status pp_nv_get_uclk_dpm_states(struct pp_smu *pp,
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
-	if (!pp_funcs || !pp_funcs->get_uclk_dpm_states)
+	ret = amdgpu_dpm_get_uclk_dpm_states(adev,
+					     clock_values_in_khz,
+					     num_states);
+	if (ret == -EOPNOTSUPP)
 		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
+		return PP_SMU_RESULT_FAIL;
 
-	if (!pp_funcs->get_uclk_dpm_states(pp_handle,
-					   clock_values_in_khz,
-					   num_states))
-		return PP_SMU_RESULT_OK;
-
-	return PP_SMU_RESULT_FAIL;
+	return PP_SMU_RESULT_OK;
 }
 
 static enum pp_smu_status pp_rn_get_dpm_clock_table(
@@ -795,16 +727,15 @@ static enum pp_smu_status pp_rn_get_dpm_clock_table(
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
-	if (!pp_funcs || !pp_funcs->get_dpm_clock_table)
+	ret = amdgpu_dpm_get_dpm_clock_table(adev, clock_table);
+	if (ret == -EOPNOTSUPP)
 		return PP_SMU_RESULT_UNSUPPORTED;
+	else if (ret)
+		return PP_SMU_RESULT_FAIL;
 
-	if (!pp_funcs->get_dpm_clock_table(pp_handle, clock_table))
-		return PP_SMU_RESULT_OK;
-
-	return PP_SMU_RESULT_FAIL;
+	return PP_SMU_RESULT_OK;
 }
 
 static enum pp_smu_status pp_rn_set_wm_ranges(struct pp_smu *pp,
@@ -812,11 +743,8 @@ static enum pp_smu_status pp_rn_set_wm_ranges(struct pp_smu *pp,
 {
 	const struct dc_context *ctx = pp->dm;
 	struct amdgpu_device *adev = ctx->driver_context;
-	void *pp_handle = adev->powerplay.pp_handle;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (pp_funcs && pp_funcs->set_watermarks_for_clocks_ranges)
-		pp_funcs->set_watermarks_for_clocks_ranges(pp_handle, ranges);
+	amdgpu_dpm_set_watermarks_for_clocks_ranges(adev, ranges);
 
 	return PP_SMU_RESULT_OK;
 }
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 3c59f16c7a6f..4aa5cca66048 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -1664,6 +1664,14 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
 	}
 }
 
+int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
+{
+	if (!is_support_sw_smu(adev))
+		return 0;
+
+	return smu_write_watermarks_table(&adev->smu);
+}
+
 int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
 			      enum smu_event_type event,
 			      uint64_t event_arg)
@@ -2165,3 +2173,213 @@ void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
 
 	amdgpu_smu_stb_debug_fs_init(adev);
 }
+
+int amdgpu_dpm_display_configuration_change(struct amdgpu_device *adev,
+					    const struct amd_pp_display_configuration *input)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->display_configuration_change)
+		return 0;
+
+	return pp_funcs->display_configuration_change(adev->powerplay.pp_handle,
+						      input);
+}
+
+int amdgpu_dpm_get_clock_by_type(struct amdgpu_device *adev,
+				 enum amd_pp_clock_type type,
+				 struct amd_pp_clocks *clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_clock_by_type)
+		return 0;
+
+	return pp_funcs->get_clock_by_type(adev->powerplay.pp_handle,
+					   type,
+					   clocks);
+}
+
+int amdgpu_dpm_get_display_mode_validation_clks(struct amdgpu_device *adev,
+						struct amd_pp_simple_clock_info *clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_display_mode_validation_clocks)
+		return 0;
+
+	return pp_funcs->get_display_mode_validation_clocks(adev->powerplay.pp_handle,
+							    clocks);
+}
+
+int amdgpu_dpm_get_clock_by_type_with_latency(struct amdgpu_device *adev,
+					      enum amd_pp_clock_type type,
+					      struct pp_clock_levels_with_latency *clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_clock_by_type_with_latency)
+		return 0;
+
+	return pp_funcs->get_clock_by_type_with_latency(adev->powerplay.pp_handle,
+							type,
+							clocks);
+}
+
+int amdgpu_dpm_get_clock_by_type_with_voltage(struct amdgpu_device *adev,
+					      enum amd_pp_clock_type type,
+					      struct pp_clock_levels_with_voltage *clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_clock_by_type_with_voltage)
+		return 0;
+
+	return pp_funcs->get_clock_by_type_with_voltage(adev->powerplay.pp_handle,
+							type,
+							clocks);
+}
+
+int amdgpu_dpm_set_watermarks_for_clocks_ranges(struct amdgpu_device *adev,
+					       void *clock_ranges)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_watermarks_for_clocks_ranges)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->set_watermarks_for_clocks_ranges(adev->powerplay.pp_handle,
+							  clock_ranges);
+}
+
+int amdgpu_dpm_display_clock_voltage_request(struct amdgpu_device *adev,
+					     struct pp_display_clock_request *clock)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->display_clock_voltage_request)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->display_clock_voltage_request(adev->powerplay.pp_handle,
+						       clock);
+}
+
+int amdgpu_dpm_get_current_clocks(struct amdgpu_device *adev,
+				  struct amd_pp_clock_info *clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_current_clocks)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_current_clocks(adev->powerplay.pp_handle,
+					    clocks);
+}
+
+void amdgpu_dpm_notify_smu_enable_pwe(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->notify_smu_enable_pwe)
+		return;
+
+	pp_funcs->notify_smu_enable_pwe(adev->powerplay.pp_handle);
+}
+
+int amdgpu_dpm_set_active_display_count(struct amdgpu_device *adev,
+					uint32_t count)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_active_display_count)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->set_active_display_count(adev->powerplay.pp_handle,
+						  count);
+}
+
+int amdgpu_dpm_set_min_deep_sleep_dcefclk(struct amdgpu_device *adev,
+					  uint32_t clock)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_min_deep_sleep_dcefclk)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->set_min_deep_sleep_dcefclk(adev->powerplay.pp_handle,
+						    clock);
+}
+
+void amdgpu_dpm_set_hard_min_dcefclk_by_freq(struct amdgpu_device *adev,
+					     uint32_t clock)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_hard_min_dcefclk_by_freq)
+		return;
+
+	pp_funcs->set_hard_min_dcefclk_by_freq(adev->powerplay.pp_handle,
+					       clock);
+}
+
+void amdgpu_dpm_set_hard_min_fclk_by_freq(struct amdgpu_device *adev,
+					  uint32_t clock)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_hard_min_fclk_by_freq)
+		return;
+
+	pp_funcs->set_hard_min_fclk_by_freq(adev->powerplay.pp_handle,
+					    clock);
+}
+
+int amdgpu_dpm_display_disable_memory_clock_switch(struct amdgpu_device *adev,
+						   bool disable_memory_clock_switch)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->display_disable_memory_clock_switch)
+		return 0;
+
+	return pp_funcs->display_disable_memory_clock_switch(adev->powerplay.pp_handle,
+							     disable_memory_clock_switch);
+}
+
+int amdgpu_dpm_get_max_sustainable_clocks_by_dc(struct amdgpu_device *adev,
+						struct pp_smu_nv_clock_table *max_clocks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_max_sustainable_clocks_by_dc)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_max_sustainable_clocks_by_dc(adev->powerplay.pp_handle,
+							  max_clocks);
+}
+
+int amdgpu_dpm_get_uclk_dpm_states(struct amdgpu_device *adev,
+				   unsigned int *clock_values_in_khz,
+				   unsigned int *num_states)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_uclk_dpm_states)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_uclk_dpm_states(adev->powerplay.pp_handle,
+					     clock_values_in_khz,
+					     num_states);
+}
+
+int amdgpu_dpm_get_dpm_clock_table(struct amdgpu_device *adev,
+				   struct dpm_clocks *clock_table)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_dpm_clock_table)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_dpm_clock_table(adev->powerplay.pp_handle,
+					     clock_table);
+}
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 039c40b1d0cb..fea203a79cb4 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -500,6 +500,7 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
 				        enum pp_clock_type type,
 				        uint32_t min,
 				        uint32_t max);
+int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev);
 int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
 		       uint64_t event_arg);
 int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
@@ -578,4 +579,41 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
 			    size_t size);
 int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev);
 void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev);
+int amdgpu_dpm_display_configuration_change(struct amdgpu_device *adev,
+					    const struct amd_pp_display_configuration *input);
+int amdgpu_dpm_get_clock_by_type(struct amdgpu_device *adev,
+				 enum amd_pp_clock_type type,
+				 struct amd_pp_clocks *clocks);
+int amdgpu_dpm_get_display_mode_validation_clks(struct amdgpu_device *adev,
+						struct amd_pp_simple_clock_info *clocks);
+int amdgpu_dpm_get_clock_by_type_with_latency(struct amdgpu_device *adev,
+					      enum amd_pp_clock_type type,
+					      struct pp_clock_levels_with_latency *clocks);
+int amdgpu_dpm_get_clock_by_type_with_voltage(struct amdgpu_device *adev,
+					      enum amd_pp_clock_type type,
+					      struct pp_clock_levels_with_voltage *clocks);
+int amdgpu_dpm_set_watermarks_for_clocks_ranges(struct amdgpu_device *adev,
+					       void *clock_ranges);
+int amdgpu_dpm_display_clock_voltage_request(struct amdgpu_device *adev,
+					     struct pp_display_clock_request *clock);
+int amdgpu_dpm_get_current_clocks(struct amdgpu_device *adev,
+				  struct amd_pp_clock_info *clocks);
+void amdgpu_dpm_notify_smu_enable_pwe(struct amdgpu_device *adev);
+int amdgpu_dpm_set_active_display_count(struct amdgpu_device *adev,
+					uint32_t count);
+int amdgpu_dpm_set_min_deep_sleep_dcefclk(struct amdgpu_device *adev,
+					  uint32_t clock);
+void amdgpu_dpm_set_hard_min_dcefclk_by_freq(struct amdgpu_device *adev,
+					     uint32_t clock);
+void amdgpu_dpm_set_hard_min_fclk_by_freq(struct amdgpu_device *adev,
+					  uint32_t clock);
+int amdgpu_dpm_display_disable_memory_clock_switch(struct amdgpu_device *adev,
+						   bool disable_memory_clock_switch);
+int amdgpu_dpm_get_max_sustainable_clocks_by_dc(struct amdgpu_device *adev,
+						struct pp_smu_nv_clock_table *max_clocks);
+enum pp_smu_status amdgpu_dpm_get_uclk_dpm_states(struct amdgpu_device *adev,
+						  unsigned int *clock_values_in_khz,
+						  unsigned int *num_states);
+int amdgpu_dpm_get_dpm_clock_table(struct amdgpu_device *adev,
+				   struct dpm_clocks *clock_table);
 #endif
-- 
2.29.0


* [PATCH V2 04/17] drm/amd/pm: do not expose those APIs used internally only in amdgpu_dpm.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (2 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 03/17] drm/amd/pm: do not expose power implementation details to display Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c Evan Quan
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Move them to amdgpu_dpm.c instead.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I59fe0efcb47c18ec7254f3624db7a2eb78d91b8c
---
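For reference, the macros being moved below call the pp_funcs hooks
directly, without NULL checks, so callers inside amdgpu_dpm.c are
expected to verify that the hook is implemented first. A minimal
sketch of the intended call pattern (the surrounding function is
hypothetical, for illustration only):

/* Sketch: guard the optional hook before using the now-private macro. */
static void amdgpu_dpm_example_vblank_check(struct amdgpu_device *adev)
{
	if (adev->powerplay.pp_funcs->vblank_too_short &&
	    amdgpu_dpm_vblank_too_short(adev))
		DRM_DEBUG("vblank period too short for mclk switching\n");
}
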
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c     | 25 +++++++++++++++++++++++--
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h | 23 -----------------------
 2 files changed, 23 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 4aa5cca66048..52ac3c883a6e 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -34,6 +34,27 @@
 
 #define WIDTH_4K 3840
 
+#define amdgpu_dpm_pre_set_power_state(adev) \
+		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_post_set_power_state(adev) \
+		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_display_configuration_changed(adev) \
+		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_print_power_state(adev, ps) \
+		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
+
+#define amdgpu_dpm_vblank_too_short(adev) \
+		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_enable_bapm(adev, e) \
+		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
+
+#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
+		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
+
 void amdgpu_dpm_print_class_info(u32 class, u32 class2)
 {
 	const char *s;
@@ -120,7 +141,7 @@ void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
 	pr_cont("\n");
 }
 
-void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
+static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
 {
 	struct drm_device *ddev = adev_to_drm(adev);
 	struct drm_crtc *crtc;
@@ -168,7 +189,7 @@ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
 	return vblank_time_us;
 }
 
-u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
+static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
 {
 	struct drm_device *dev = adev_to_drm(adev);
 	struct drm_crtc *crtc;
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index fea203a79cb4..6681b878e75f 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -259,27 +259,6 @@ enum amdgpu_pcie_gen {
 	AMDGPU_PCIE_GEN_INVALID = 0xffff
 };
 
-#define amdgpu_dpm_pre_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_post_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_display_configuration_changed(adev) \
-		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_print_power_state(adev, ps) \
-		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
-
-#define amdgpu_dpm_vblank_too_short(adev) \
-		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_enable_bapm(adev, e) \
-		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
-
-#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
-		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
-
 #define amdgpu_dpm_reset_power_profile_state(adev, request) \
 		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
 			(adev)->powerplay.pp_handle, request))
@@ -412,8 +391,6 @@ void amdgpu_dpm_print_cap_info(u32 caps);
 void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
 				struct amdgpu_ps *rps);
 u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
-u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev);
-void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev);
 int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
 			   void *data, uint32_t *size);
 
-- 
2.29.0


* [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (3 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 04/17] drm/amd/pm: do not expose those APIs used internally only in amdgpu_dpm.c Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 12:22   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c Evan Quan
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Move them to si_dpm.c instead.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I288205cfd7c6ba09cfb22626ff70360d61ff0c67
--
v1->v2:
  - rename the API with an "si_" prefix (Alex)
---
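As a worked example of the fallback logic in the renamed helper
(sketch only; the input values here are hypothetical): any asic_gen
outside GEN1..GEN3, e.g. AMDGPU_PCIE_GEN_INVALID, lands in the default
branch, where default_gen is honored only if the matching
CAIL_PCIE_LINK_SPEED_SUPPORT_GENx bit is set in sys_mask:

/* Sketch: sys_mask supports up to GEN2, GEN3 is requested. */
enum amdgpu_pcie_gen gen =
	si_gen_pcie_gen_support(adev,
				CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2,
				AMDGPU_PCIE_GEN_INVALID,
				AMDGPU_PCIE_GEN3);
/* gen == AMDGPU_PCIE_GEN1: GEN3 is not supported by sys_mask and
 * GEN2 is not the requested default, so the helper falls back to
 * GEN1. */
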
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 25 -----------
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 25 -----------
 drivers/gpu/drm/amd/pm/powerplay/si_dpm.c | 54 +++++++++++++++++++----
 drivers/gpu/drm/amd/pm/powerplay/si_dpm.h |  7 +++
 4 files changed, 53 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 52ac3c883a6e..fbfc07a83122 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -894,31 +894,6 @@ void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
 	}
 }
 
-enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
-						 u32 sys_mask,
-						 enum amdgpu_pcie_gen asic_gen,
-						 enum amdgpu_pcie_gen default_gen)
-{
-	switch (asic_gen) {
-	case AMDGPU_PCIE_GEN1:
-		return AMDGPU_PCIE_GEN1;
-	case AMDGPU_PCIE_GEN2:
-		return AMDGPU_PCIE_GEN2;
-	case AMDGPU_PCIE_GEN3:
-		return AMDGPU_PCIE_GEN3;
-	default:
-		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
-		    (default_gen == AMDGPU_PCIE_GEN3))
-			return AMDGPU_PCIE_GEN3;
-		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
-			 (default_gen == AMDGPU_PCIE_GEN2))
-			return AMDGPU_PCIE_GEN2;
-		else
-			return AMDGPU_PCIE_GEN1;
-	}
-	return AMDGPU_PCIE_GEN1;
-}
-
 struct amd_vce_state*
 amdgpu_get_vce_clock_state(void *handle, u32 idx)
 {
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 6681b878e75f..f43b96dfe9d8 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -45,19 +45,6 @@ enum amdgpu_int_thermal_type {
 	THERMAL_TYPE_KV,
 };
 
-enum amdgpu_dpm_auto_throttle_src {
-	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
-	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
-};
-
-enum amdgpu_dpm_event_src {
-	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
-	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
-	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
-	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
-	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
-};
-
 struct amdgpu_ps {
 	u32 caps; /* vbios flags */
 	u32 class; /* vbios flags */
@@ -252,13 +239,6 @@ struct amdgpu_dpm_fan {
 	bool ucode_fan_control;
 };
 
-enum amdgpu_pcie_gen {
-	AMDGPU_PCIE_GEN1 = 0,
-	AMDGPU_PCIE_GEN2 = 1,
-	AMDGPU_PCIE_GEN3 = 2,
-	AMDGPU_PCIE_GEN_INVALID = 0xffff
-};
-
 #define amdgpu_dpm_reset_power_profile_state(adev, request) \
 		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
 			(adev)->powerplay.pp_handle, request))
@@ -403,11 +383,6 @@ void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
 
 void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
 
-enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
-						 u32 sys_mask,
-						 enum amdgpu_pcie_gen asic_gen,
-						 enum amdgpu_pcie_gen default_gen);
-
 struct amd_vce_state*
 amdgpu_get_vce_clock_state(void *handle, u32 idx);
 
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
index 81f82aa05ec2..4f84d8b893f1 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
@@ -96,6 +96,19 @@ union pplib_clock_info {
 	struct _ATOM_PPLIB_SI_CLOCK_INFO si;
 };
 
+enum amdgpu_dpm_auto_throttle_src {
+	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
+	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
+};
+
+enum amdgpu_dpm_event_src {
+	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
+	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
+	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
+	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
+	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
+};
+
 static const u32 r600_utc[R600_PM_NUMBER_OF_TC] =
 {
 	R600_UTC_DFLT_00,
@@ -4927,6 +4940,31 @@ static int si_populate_smc_initial_state(struct amdgpu_device *adev,
 	return 0;
 }
 
+static enum amdgpu_pcie_gen si_gen_pcie_gen_support(struct amdgpu_device *adev,
+						    u32 sys_mask,
+						    enum amdgpu_pcie_gen asic_gen,
+						    enum amdgpu_pcie_gen default_gen)
+{
+	switch (asic_gen) {
+	case AMDGPU_PCIE_GEN1:
+		return AMDGPU_PCIE_GEN1;
+	case AMDGPU_PCIE_GEN2:
+		return AMDGPU_PCIE_GEN2;
+	case AMDGPU_PCIE_GEN3:
+		return AMDGPU_PCIE_GEN3;
+	default:
+		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
+		    (default_gen == AMDGPU_PCIE_GEN3))
+			return AMDGPU_PCIE_GEN3;
+		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
+			 (default_gen == AMDGPU_PCIE_GEN2))
+			return AMDGPU_PCIE_GEN2;
+		else
+			return AMDGPU_PCIE_GEN1;
+	}
+	return AMDGPU_PCIE_GEN1;
+}
+
 static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
 				      SISLANDS_SMC_STATETABLE *table)
 {
@@ -4989,10 +5027,10 @@ static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
 							      &table->ACPIState.level.std_vddc);
 		}
 		table->ACPIState.level.gen2PCIE =
-			(u8)amdgpu_get_pcie_gen_support(adev,
-							si_pi->sys_pcie_mask,
-							si_pi->boot_pcie_gen,
-							AMDGPU_PCIE_GEN1);
+			(u8)si_gen_pcie_gen_support(adev,
+						    si_pi->sys_pcie_mask,
+						    si_pi->boot_pcie_gen,
+						    AMDGPU_PCIE_GEN1);
 
 		if (si_pi->vddc_phase_shed_control)
 			si_populate_phase_shedding_value(adev,
@@ -7148,10 +7186,10 @@ static void si_parse_pplib_clock_info(struct amdgpu_device *adev,
 	pl->vddc = le16_to_cpu(clock_info->si.usVDDC);
 	pl->vddci = le16_to_cpu(clock_info->si.usVDDCI);
 	pl->flags = le32_to_cpu(clock_info->si.ulFlags);
-	pl->pcie_gen = amdgpu_get_pcie_gen_support(adev,
-						   si_pi->sys_pcie_mask,
-						   si_pi->boot_pcie_gen,
-						   clock_info->si.ucPCIEGen);
+	pl->pcie_gen = si_gen_pcie_gen_support(adev,
+					       si_pi->sys_pcie_mask,
+					       si_pi->boot_pcie_gen,
+					       clock_info->si.ucPCIEGen);
 
 	/* patch up vddc if necessary */
 	ret = si_get_leakage_voltage_from_leakage_index(adev, pl->vddc,
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
index bc0be6818e21..8c267682eeef 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
+++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
@@ -595,6 +595,13 @@ struct rv7xx_power_info {
 	RV770_SMC_STATETABLE smc_statetable;
 };
 
+enum amdgpu_pcie_gen {
+	AMDGPU_PCIE_GEN1 = 0,
+	AMDGPU_PCIE_GEN2 = 1,
+	AMDGPU_PCIE_GEN3 = 2,
+	AMDGPU_PCIE_GEN_INVALID = 0xffff
+};
+
 struct rv7xx_pl {
 	u32 sclk;
 	u32 mclk;
-- 
2.29.0


* [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (4 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 12:27   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv) Evan Quan
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Move it to kv_dpm.c instead.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I554332b386491a79b7913f72786f1e2cb1f8165b
--
v1->v2:
  - rename the API with a "kv_" prefix (Alex)
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 23 ---------------------
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   |  2 --
 drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c | 25 ++++++++++++++++++++++-
 3 files changed, 24 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index fbfc07a83122..ecaf0081bc31 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -209,29 +209,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
 	return vrefresh;
 }
 
-bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
-{
-	switch (sensor) {
-	case THERMAL_TYPE_RV6XX:
-	case THERMAL_TYPE_RV770:
-	case THERMAL_TYPE_EVERGREEN:
-	case THERMAL_TYPE_SUMO:
-	case THERMAL_TYPE_NI:
-	case THERMAL_TYPE_SI:
-	case THERMAL_TYPE_CI:
-	case THERMAL_TYPE_KV:
-		return true;
-	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
-	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
-		return false; /* need special handling */
-	case THERMAL_TYPE_NONE:
-	case THERMAL_TYPE_EXTERNAL:
-	case THERMAL_TYPE_EXTERNAL_GPIO:
-	default:
-		return false;
-	}
-}
-
 union power_info {
 	struct _ATOM_POWERPLAY_INFO info;
 	struct _ATOM_POWERPLAY_INFO_V2 info_2;
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index f43b96dfe9d8..01120b302590 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -374,8 +374,6 @@ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
 int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
 			   void *data, uint32_t *size);
 
-bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor);
-
 int amdgpu_get_platform_caps(struct amdgpu_device *adev);
 
 int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
index bcae42cef374..380a5336c74f 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
@@ -1256,6 +1256,29 @@ static void kv_dpm_enable_bapm(void *handle, bool enable)
 	}
 }
 
+static bool kv_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
+{
+	switch (sensor) {
+	case THERMAL_TYPE_RV6XX:
+	case THERMAL_TYPE_RV770:
+	case THERMAL_TYPE_EVERGREEN:
+	case THERMAL_TYPE_SUMO:
+	case THERMAL_TYPE_NI:
+	case THERMAL_TYPE_SI:
+	case THERMAL_TYPE_CI:
+	case THERMAL_TYPE_KV:
+		return true;
+	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
+	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
+		return false; /* need special handling */
+	case THERMAL_TYPE_NONE:
+	case THERMAL_TYPE_EXTERNAL:
+	case THERMAL_TYPE_EXTERNAL_GPIO:
+	default:
+		return false;
+	}
+}
+
 static int kv_dpm_enable(struct amdgpu_device *adev)
 {
 	struct kv_power_info *pi = kv_get_pi(adev);
@@ -1352,7 +1375,7 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
 	}
 
 	if (adev->irq.installed &&
-	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
+	    kv_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
 		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
 		if (ret) {
 			DRM_ERROR("kv_set_thermal_temperature_range failed\n");
-- 
2.29.0


* [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (5 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 13:21   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 08/17] drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure Evan Quan
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Those APIs are used only by the legacy ASICs (si/kv) and cannot be
shared by other ASICs. So, create a new holder, legacy_dpm.c, for them.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
--
v1->v2:
  - also move the other APIs used only by si/kv from amdgpu_atombios.c
    to the new holder (Alex)
---
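The change_power_state hook added to amd_pm_funcs below is what lets
the unified entry point reach the legacy power state machinery without
calling into legacy_dpm.c directly. A minimal sketch of how such a
dispatch could look on the amdgpu_dpm.c side (assumption: legacy_dpm.c
installs the hook in its amd_pm_funcs table; the wrapper name here is
hypothetical):

/* Sketch: dispatch through the new optional hook, following the
 * -EOPNOTSUPP convention used by the other amdgpu_dpm wrappers. */
static int amdgpu_dpm_change_power_state(struct amdgpu_device *adev)
{
	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

	if (!pp_funcs->change_power_state)
		return -EOPNOTSUPP;

	return pp_funcs->change_power_state(adev->powerplay.pp_handle);
}
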
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
 .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
 drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
 drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453 +++++++++++++++++
 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
 drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
 10 files changed, 1534 insertions(+), 1470 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
 create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
index 12a6b1c99c93..f2e447212e62 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
@@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
 	return 0;
 }
 
-int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
-					    u32 clock,
-					    bool strobe_mode,
-					    struct atom_mpll_param *mpll_param)
-{
-	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
-	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
-	u8 frev, crev;
-
-	memset(&args, 0, sizeof(args));
-	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
-
-	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
-		return -EINVAL;
-
-	switch (frev) {
-	case 2:
-		switch (crev) {
-		case 1:
-			/* SI */
-			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
-			args.ucInputFlag = 0;
-			if (strobe_mode)
-				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
-
-			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
-
-			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
-			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
-			mpll_param->post_div = args.ucPostDiv;
-			mpll_param->dll_speed = args.ucDllSpeed;
-			mpll_param->bwcntl = args.ucBWCntl;
-			mpll_param->vco_mode =
-				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
-			mpll_param->yclk_sel =
-				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
-			mpll_param->qdr =
-				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
-			mpll_param->half_rate =
-				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
-			break;
-		default:
-			return -EINVAL;
-		}
-		break;
-	default:
-		return -EINVAL;
-	}
-	return 0;
-}
-
-void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
-					     u32 eng_clock, u32 mem_clock)
-{
-	SET_ENGINE_CLOCK_PS_ALLOCATION args;
-	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
-	u32 tmp;
-
-	memset(&args, 0, sizeof(args));
-
-	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
-	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
-
-	args.ulTargetEngineClock = cpu_to_le32(tmp);
-	if (mem_clock)
-		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
-
-	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
-}
-
-void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
-					  u16 *vddc, u16 *vddci, u16 *mvdd)
-{
-	struct amdgpu_mode_info *mode_info = &adev->mode_info;
-	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
-	u8 frev, crev;
-	u16 data_offset;
-	union firmware_info *firmware_info;
-
-	*vddc = 0;
-	*vddci = 0;
-	*mvdd = 0;
-
-	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
-				   &frev, &crev, &data_offset)) {
-		firmware_info =
-			(union firmware_info *)(mode_info->atom_context->bios +
-						data_offset);
-		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
-		if ((frev == 2) && (crev >= 2)) {
-			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
-			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
-		}
-	}
-}
-
-union set_voltage {
-	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
-	struct _SET_VOLTAGE_PARAMETERS v1;
-	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
-	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
-};
-
-int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
-			     u16 voltage_id, u16 *voltage)
-{
-	union set_voltage args;
-	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
-	u8 frev, crev;
-
-	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
-		return -EINVAL;
-
-	switch (crev) {
-	case 1:
-		return -EINVAL;
-	case 2:
-		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
-		args.v2.ucVoltageMode = 0;
-		args.v2.usVoltageLevel = 0;
-
-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
-
-		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
-		break;
-	case 3:
-		args.v3.ucVoltageType = voltage_type;
-		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
-		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
-
-		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
-
-		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
-		break;
-	default:
-		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
-						      u16 *voltage,
-						      u16 leakage_idx)
-{
-	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
-}
-
-union voltage_object_info {
-	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
-	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
-	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
-};
-
-union voltage_object {
-	struct _ATOM_VOLTAGE_OBJECT v1;
-	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
-	union _ATOM_VOLTAGE_OBJECT_V3 v3;
-};
-
-
-static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
-									u8 voltage_type, u8 voltage_mode)
-{
-	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
-	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
-	u8 *start = (u8 *)v3;
-
-	while (offset < size) {
-		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
-		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
-		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
-			return vo;
-		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
-	}
-	return NULL;
-}
-
-int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
-			      u8 voltage_type,
-			      u8 *svd_gpio_id, u8 *svc_gpio_id)
-{
-	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
-	u8 frev, crev;
-	u16 data_offset, size;
-	union voltage_object_info *voltage_info;
-	union voltage_object *voltage_object = NULL;
-
-	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
-				   &frev, &crev, &data_offset)) {
-		voltage_info = (union voltage_object_info *)
-			(adev->mode_info.atom_context->bios + data_offset);
-
-		switch (frev) {
-		case 3:
-			switch (crev) {
-			case 1:
-				voltage_object = (union voltage_object *)
-					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
-								      voltage_type,
-								      VOLTAGE_OBJ_SVID2);
-				if (voltage_object) {
-					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
-					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
-				} else {
-					return -EINVAL;
-				}
-				break;
-			default:
-				DRM_ERROR("unknown voltage object table\n");
-				return -EINVAL;
-			}
-			break;
-		default:
-			DRM_ERROR("unknown voltage object table\n");
-			return -EINVAL;
-		}
-
-	}
-	return 0;
-}
-
-bool
-amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
-				u8 voltage_type, u8 voltage_mode)
-{
-	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
-	u8 frev, crev;
-	u16 data_offset, size;
-	union voltage_object_info *voltage_info;
-
-	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
-				   &frev, &crev, &data_offset)) {
-		voltage_info = (union voltage_object_info *)
-			(adev->mode_info.atom_context->bios + data_offset);
-
-		switch (frev) {
-		case 3:
-			switch (crev) {
-			case 1:
-				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
-								  voltage_type, voltage_mode))
-					return true;
-				break;
-			default:
-				DRM_ERROR("unknown voltage object table\n");
-				return false;
-			}
-			break;
-		default:
-			DRM_ERROR("unknown voltage object table\n");
-			return false;
-		}
-
-	}
-	return false;
-}
-
-int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
-				      u8 voltage_type, u8 voltage_mode,
-				      struct atom_voltage_table *voltage_table)
-{
-	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
-	u8 frev, crev;
-	u16 data_offset, size;
-	int i;
-	union voltage_object_info *voltage_info;
-	union voltage_object *voltage_object = NULL;
-
-	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
-				   &frev, &crev, &data_offset)) {
-		voltage_info = (union voltage_object_info *)
-			(adev->mode_info.atom_context->bios + data_offset);
-
-		switch (frev) {
-		case 3:
-			switch (crev) {
-			case 1:
-				voltage_object = (union voltage_object *)
-					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
-								      voltage_type, voltage_mode);
-				if (voltage_object) {
-					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
-						&voltage_object->v3.asGpioVoltageObj;
-					VOLTAGE_LUT_ENTRY_V2 *lut;
-					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
-						return -EINVAL;
-					lut = &gpio->asVolGpioLut[0];
-					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
-						voltage_table->entries[i].value =
-							le16_to_cpu(lut->usVoltageValue);
-						voltage_table->entries[i].smio_low =
-							le32_to_cpu(lut->ulVoltageId);
-						lut = (VOLTAGE_LUT_ENTRY_V2 *)
-							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
-					}
-					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
-					voltage_table->count = gpio->ucGpioEntryNum;
-					voltage_table->phase_delay = gpio->ucPhaseDelay;
-					return 0;
-				}
-				break;
-			default:
-				DRM_ERROR("unknown voltage object table\n");
-				return -EINVAL;
-			}
-			break;
-		default:
-			DRM_ERROR("unknown voltage object table\n");
-			return -EINVAL;
-		}
-	}
-	return -EINVAL;
-}
-
-union vram_info {
-	struct _ATOM_VRAM_INFO_V3 v1_3;
-	struct _ATOM_VRAM_INFO_V4 v1_4;
-	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
-};
-
-#define MEM_ID_MASK           0xff000000
-#define MEM_ID_SHIFT          24
-#define CLOCK_RANGE_MASK      0x00ffffff
-#define CLOCK_RANGE_SHIFT     0
-#define LOW_NIBBLE_MASK       0xf
-#define DATA_EQU_PREV         0
-#define DATA_FROM_TABLE       4
-
-int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
-				      u8 module_index,
-				      struct atom_mc_reg_table *reg_table)
-{
-	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
-	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
-	u32 i = 0, j;
-	u16 data_offset, size;
-	union vram_info *vram_info;
-
-	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
-
-	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
-				   &frev, &crev, &data_offset)) {
-		vram_info = (union vram_info *)
-			(adev->mode_info.atom_context->bios + data_offset);
-		switch (frev) {
-		case 1:
-			DRM_ERROR("old table version %d, %d\n", frev, crev);
-			return -EINVAL;
-		case 2:
-			switch (crev) {
-			case 1:
-				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
-					ATOM_INIT_REG_BLOCK *reg_block =
-						(ATOM_INIT_REG_BLOCK *)
-						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
-					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
-						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
-						((u8 *)reg_block + (2 * sizeof(u16)) +
-						 le16_to_cpu(reg_block->usRegIndexTblSize));
-					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
-					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
-							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
-					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
-						return -EINVAL;
-					while (i < num_entries) {
-						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
-							break;
-						reg_table->mc_reg_address[i].s1 =
-							(u16)(le16_to_cpu(format->usRegIndex));
-						reg_table->mc_reg_address[i].pre_reg_data =
-							(u8)(format->ucPreRegDataLength);
-						i++;
-						format = (ATOM_INIT_REG_INDEX_FORMAT *)
-							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
-					}
-					reg_table->last = i;
-					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
-					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
-						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
-								>> MEM_ID_SHIFT);
-						if (module_index == t_mem_id) {
-							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
-								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
-								      >> CLOCK_RANGE_SHIFT);
-							for (i = 0, j = 1; i < reg_table->last; i++) {
-								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
-									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
-										(u32)le32_to_cpu(*((u32 *)reg_data + j));
-									j++;
-								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
-									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
-										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
-								}
-							}
-							num_ranges++;
-						}
-						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
-							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
-					}
-					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
-						return -EINVAL;
-					reg_table->num_entries = num_ranges;
-				} else
-					return -EINVAL;
-				break;
-			default:
-				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
-				return -EINVAL;
-			}
-			break;
-		default:
-			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
-			return -EINVAL;
-		}
-		return 0;
-	}
-	return -EINVAL;
-}
-
 bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
 {
 	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
index 27e74b1fc260..cb5649298dcb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
@@ -160,26 +160,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
 				       bool strobe_mode,
 				       struct atom_clock_dividers *dividers);
 
-int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
-					    u32 clock,
-					    bool strobe_mode,
-					    struct atom_mpll_param *mpll_param);
-
-void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
-					     u32 eng_clock, u32 mem_clock);
-
-bool
-amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
-				u8 voltage_type, u8 voltage_mode);
-
-int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
-				      u8 voltage_type, u8 voltage_mode,
-				      struct atom_voltage_table *voltage_table);
-
-int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
-				      u8 module_index,
-				      struct atom_mc_reg_table *reg_table);
-
 bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev);
 
 void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock);
@@ -190,21 +170,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
 bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
 
 void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
-int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
-			     u16 voltage_id, u16 *voltage);
-int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
-						      u16 *voltage,
-						      u16 leakage_idx);
-void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
-					  u16 *vddc, u16 *vddci, u16 *mvdd);
 int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
 				       u8 clock_type,
 				       u32 clock,
 				       bool strobe_mode,
 				       struct atom_clock_dividers *dividers);
-int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
-			      u8 voltage_type,
-			      u8 *svd_gpio_id, u8 *svc_gpio_id);
 
 int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
 				   uint32_t table,
diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index 2e295facd086..cdf724dcf832 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -404,6 +404,7 @@ struct amd_pm_funcs {
 	int (*get_dpm_clock_table)(void *handle,
 				   struct dpm_clocks *clock_table);
 	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
+	int (*change_power_state)(void *handle);
 };
 
 struct metrics_table_header {
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index ecaf0081bc31..c6801d10cde6 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -34,113 +34,9 @@
 
 #define WIDTH_4K 3840
 
-#define amdgpu_dpm_pre_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_post_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_display_configuration_changed(adev) \
-		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_print_power_state(adev, ps) \
-		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
-
-#define amdgpu_dpm_vblank_too_short(adev) \
-		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
-
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
 
-#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
-		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
-
-void amdgpu_dpm_print_class_info(u32 class, u32 class2)
-{
-	const char *s;
-
-	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
-	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
-	default:
-		s = "none";
-		break;
-	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
-		s = "battery";
-		break;
-	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
-		s = "balanced";
-		break;
-	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
-		s = "performance";
-		break;
-	}
-	printk("\tui class: %s\n", s);
-	printk("\tinternal class:");
-	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
-	    (class2 == 0))
-		pr_cont(" none");
-	else {
-		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
-			pr_cont(" boot");
-		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
-			pr_cont(" thermal");
-		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
-			pr_cont(" limited_pwr");
-		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
-			pr_cont(" rest");
-		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
-			pr_cont(" forced");
-		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
-			pr_cont(" 3d_perf");
-		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
-			pr_cont(" ovrdrv");
-		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
-			pr_cont(" uvd");
-		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
-			pr_cont(" 3d_low");
-		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
-			pr_cont(" acpi");
-		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
-			pr_cont(" uvd_hd2");
-		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
-			pr_cont(" uvd_hd");
-		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
-			pr_cont(" uvd_sd");
-		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
-			pr_cont(" limited_pwr2");
-		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
-			pr_cont(" ulv");
-		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
-			pr_cont(" uvd_mvc");
-	}
-	pr_cont("\n");
-}
-
-void amdgpu_dpm_print_cap_info(u32 caps)
-{
-	printk("\tcaps:");
-	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
-		pr_cont(" single_disp");
-	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
-		pr_cont(" video");
-	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
-		pr_cont(" no_dc");
-	pr_cont("\n");
-}
-
-void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
-				struct amdgpu_ps *rps)
-{
-	printk("\tstatus:");
-	if (rps == adev->pm.dpm.current_ps)
-		pr_cont(" c");
-	if (rps == adev->pm.dpm.requested_ps)
-		pr_cont(" r");
-	if (rps == adev->pm.dpm.boot_ps)
-		pr_cont(" b");
-	pr_cont("\n");
-}
-
 static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
 {
 	struct drm_device *ddev = adev_to_drm(adev);
@@ -161,7 +57,6 @@ static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
 	}
 }
 
-
 u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
 {
 	struct drm_device *dev = adev_to_drm(adev);
@@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
 	return vrefresh;
 }
 
-union power_info {
-	struct _ATOM_POWERPLAY_INFO info;
-	struct _ATOM_POWERPLAY_INFO_V2 info_2;
-	struct _ATOM_POWERPLAY_INFO_V3 info_3;
-	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
-	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
-	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
-	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
-	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
-};
-
-union fan_info {
-	struct _ATOM_PPLIB_FANTABLE fan;
-	struct _ATOM_PPLIB_FANTABLE2 fan2;
-	struct _ATOM_PPLIB_FANTABLE3 fan3;
-};
-
-static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
-					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
-{
-	u32 size = atom_table->ucNumEntries *
-		sizeof(struct amdgpu_clock_voltage_dependency_entry);
-	int i;
-	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
-
-	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
-	if (!amdgpu_table->entries)
-		return -ENOMEM;
-
-	entry = &atom_table->entries[0];
-	for (i = 0; i < atom_table->ucNumEntries; i++) {
-		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
-			(entry->ucClockHigh << 16);
-		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
-		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
-			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
-	}
-	amdgpu_table->count = atom_table->ucNumEntries;
-
-	return 0;
-}
-
-int amdgpu_get_platform_caps(struct amdgpu_device *adev)
-{
-	struct amdgpu_mode_info *mode_info = &adev->mode_info;
-	union power_info *power_info;
-	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
-	u16 data_offset;
-	u8 frev, crev;
-
-	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
-				   &frev, &crev, &data_offset))
-		return -EINVAL;
-	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
-
-	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
-	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
-	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
-
-	return 0;
-}
-
-/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
-#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
-
-int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
-{
-	struct amdgpu_mode_info *mode_info = &adev->mode_info;
-	union power_info *power_info;
-	union fan_info *fan_info;
-	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
-	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
-	u16 data_offset;
-	u8 frev, crev;
-	int ret, i;
-
-	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
-				   &frev, &crev, &data_offset))
-		return -EINVAL;
-	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
-
-	/* fan table */
-	if (le16_to_cpu(power_info->pplib.usTableSize) >=
-	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
-		if (power_info->pplib3.usFanTableOffset) {
-			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
-						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
-			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
-			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
-			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
-			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
-			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
-			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
-			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
-			if (fan_info->fan.ucFanTableFormat >= 2)
-				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
-			else
-				adev->pm.dpm.fan.t_max = 10900;
-			adev->pm.dpm.fan.cycle_delay = 100000;
-			if (fan_info->fan.ucFanTableFormat >= 3) {
-				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
-				adev->pm.dpm.fan.default_max_fan_pwm =
-					le16_to_cpu(fan_info->fan3.usFanPWMMax);
-				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
-				adev->pm.dpm.fan.fan_output_sensitivity =
-					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
-			}
-			adev->pm.dpm.fan.ucode_fan_control = true;
-		}
-	}
-
-	/* clock dependancy tables, shedding tables */
-	if (le16_to_cpu(power_info->pplib.usTableSize) >=
-	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
-		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
-			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
-			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
-								 dep_table);
-			if (ret) {
-				amdgpu_free_extended_power_table(adev);
-				return ret;
-			}
-		}
-		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
-			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
-			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
-								 dep_table);
-			if (ret) {
-				amdgpu_free_extended_power_table(adev);
-				return ret;
-			}
-		}
-		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
-			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
-			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
-								 dep_table);
-			if (ret) {
-				amdgpu_free_extended_power_table(adev);
-				return ret;
-			}
-		}
-		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
-			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
-			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
-								 dep_table);
-			if (ret) {
-				amdgpu_free_extended_power_table(adev);
-				return ret;
-			}
-		}
-		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
-			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
-				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
-			if (clk_v->ucNumEntries) {
-				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
-					le16_to_cpu(clk_v->entries[0].usSclkLow) |
-					(clk_v->entries[0].ucSclkHigh << 16);
-				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
-					le16_to_cpu(clk_v->entries[0].usMclkLow) |
-					(clk_v->entries[0].ucMclkHigh << 16);
-				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
-					le16_to_cpu(clk_v->entries[0].usVddc);
-				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
-					le16_to_cpu(clk_v->entries[0].usVddci);
-			}
-		}
-		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
-			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
-				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
-			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
-
-			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
-				kcalloc(psl->ucNumEntries,
-					sizeof(struct amdgpu_phase_shedding_limits_entry),
-					GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-
-			entry = &psl->entries[0];
-			for (i = 0; i < psl->ucNumEntries; i++) {
-				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
-					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
-				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
-					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
-				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
-					le16_to_cpu(entry->usVoltage);
-				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
-			}
-			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
-				psl->ucNumEntries;
-		}
-	}
-
-	/* cac data */
-	if (le16_to_cpu(power_info->pplib.usTableSize) >=
-	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
-		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
-		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
-		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
-		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
-		if (adev->pm.dpm.tdp_od_limit)
-			adev->pm.dpm.power_control = true;
-		else
-			adev->pm.dpm.power_control = false;
-		adev->pm.dpm.tdp_adjustment = 0;
-		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
-		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
-		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
-		if (power_info->pplib5.usCACLeakageTableOffset) {
-			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
-				(ATOM_PPLIB_CAC_Leakage_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
-			ATOM_PPLIB_CAC_Leakage_Record *entry;
-			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
-			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			entry = &cac_table->entries[0];
-			for (i = 0; i < cac_table->ucNumEntries; i++) {
-				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
-					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
-						le16_to_cpu(entry->usVddc1);
-					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
-						le16_to_cpu(entry->usVddc2);
-					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
-						le16_to_cpu(entry->usVddc3);
-				} else {
-					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
-						le16_to_cpu(entry->usVddc);
-					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
-						le32_to_cpu(entry->ulLeakageValue);
-				}
-				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
-			}
-			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
-		}
-	}
-
-	/* ext tables */
-	if (le16_to_cpu(power_info->pplib.usTableSize) >=
-	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
-		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
-			(mode_info->atom_context->bios + data_offset +
-			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
-			ext_hdr->usVCETableOffset) {
-			VCEClockInfoArray *array = (VCEClockInfoArray *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
-			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
-				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
-				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
-			ATOM_PPLIB_VCE_State_Table *states =
-				(ATOM_PPLIB_VCE_State_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
-				 1 + (array->ucNumEntries * sizeof (VCEClockInfo)) +
-				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
-			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
-			ATOM_PPLIB_VCE_State_Record *state_entry;
-			VCEClockInfo *vce_clk;
-			u32 size = limits->numEntries *
-				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
-			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
-				kzalloc(size, GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
-				limits->numEntries;
-			entry = &limits->entries[0];
-			state_entry = &states->entries[0];
-			for (i = 0; i < limits->numEntries; i++) {
-				vce_clk = (VCEClockInfo *)
-					((u8 *)&array->entries[0] +
-					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
-				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
-					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
-				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
-					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
-				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
-					le16_to_cpu(entry->usVoltage);
-				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
-			}
-			adev->pm.dpm.num_of_vce_states =
-					states->numEntries > AMD_MAX_VCE_LEVELS ?
-					AMD_MAX_VCE_LEVELS : states->numEntries;
-			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
-				vce_clk = (VCEClockInfo *)
-					((u8 *)&array->entries[0] +
-					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
-				adev->pm.dpm.vce_states[i].evclk =
-					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
-				adev->pm.dpm.vce_states[i].ecclk =
-					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
-				adev->pm.dpm.vce_states[i].clk_idx =
-					state_entry->ucClockInfoIndex & 0x3f;
-				adev->pm.dpm.vce_states[i].pstate =
-					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
-				state_entry = (ATOM_PPLIB_VCE_State_Record *)
-					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
-			}
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
-			ext_hdr->usUVDTableOffset) {
-			UVDClockInfoArray *array = (UVDClockInfoArray *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
-			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
-				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
-				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
-			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
-			u32 size = limits->numEntries *
-				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
-			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
-				kzalloc(size, GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
-				limits->numEntries;
-			entry = &limits->entries[0];
-			for (i = 0; i < limits->numEntries; i++) {
-				UVDClockInfo *uvd_clk = (UVDClockInfo *)
-					((u8 *)&array->entries[0] +
-					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
-				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
-					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
-				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
-					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
-				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
-					le16_to_cpu(entry->usVoltage);
-				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
-			}
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
-			ext_hdr->usSAMUTableOffset) {
-			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
-				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
-			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
-			u32 size = limits->numEntries *
-				sizeof(struct amdgpu_clock_voltage_dependency_entry);
-			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
-				kzalloc(size, GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
-				limits->numEntries;
-			entry = &limits->entries[0];
-			for (i = 0; i < limits->numEntries; i++) {
-				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
-					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
-				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
-					le16_to_cpu(entry->usVoltage);
-				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
-			}
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
-		    ext_hdr->usPPMTableOffset) {
-			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usPPMTableOffset));
-			adev->pm.dpm.dyn_state.ppm_table =
-				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.ppm_table) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
-			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
-				le16_to_cpu(ppm->usCpuCoreNumber);
-			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
-				le32_to_cpu(ppm->ulPlatformTDP);
-			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
-				le32_to_cpu(ppm->ulSmallACPlatformTDP);
-			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
-				le32_to_cpu(ppm->ulPlatformTDC);
-			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
-				le32_to_cpu(ppm->ulSmallACPlatformTDC);
-			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
-				le32_to_cpu(ppm->ulApuTDP);
-			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
-				le32_to_cpu(ppm->ulDGpuTDP);
-			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
-				le32_to_cpu(ppm->ulDGpuUlvPower);
-			adev->pm.dpm.dyn_state.ppm_table->tj_max =
-				le32_to_cpu(ppm->ulTjmax);
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
-			ext_hdr->usACPTableOffset) {
-			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
-				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
-			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
-			u32 size = limits->numEntries *
-				sizeof(struct amdgpu_clock_voltage_dependency_entry);
-			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
-				kzalloc(size, GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
-				limits->numEntries;
-			entry = &limits->entries[0];
-			for (i = 0; i < limits->numEntries; i++) {
-				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
-					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
-				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
-					le16_to_cpu(entry->usVoltage);
-				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
-					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
-			}
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
-			ext_hdr->usPowerTuneTableOffset) {
-			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
-					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
-			ATOM_PowerTune_Table *pt;
-			adev->pm.dpm.dyn_state.cac_tdp_table =
-				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
-			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
-				amdgpu_free_extended_power_table(adev);
-				return -ENOMEM;
-			}
-			if (rev > 0) {
-				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
-					(mode_info->atom_context->bios + data_offset +
-					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
-				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
-					ppt->usMaximumPowerDeliveryLimit;
-				pt = &ppt->power_tune_table;
-			} else {
-				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
-					(mode_info->atom_context->bios + data_offset +
-					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
-				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
-				pt = &ppt->power_tune_table;
-			}
-			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
-			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
-				le16_to_cpu(pt->usConfigurableTDP);
-			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
-			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
-				le16_to_cpu(pt->usBatteryPowerLimit);
-			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
-				le16_to_cpu(pt->usSmallPowerLimit);
-			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
-				le16_to_cpu(pt->usLowCACLeakage);
-			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
-				le16_to_cpu(pt->usHighCACLeakage);
-		}
-		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
-				ext_hdr->usSclkVddgfxTableOffset) {
-			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
-				(mode_info->atom_context->bios + data_offset +
-				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
-			ret = amdgpu_parse_clk_voltage_dep_table(
-					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
-					dep_table);
-			if (ret) {
-				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
-				return ret;
-			}
-		}
-	}
-
-	return 0;
-}
-
-void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
-{
-	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
-
-	kfree(dyn_state->vddc_dependency_on_sclk.entries);
-	kfree(dyn_state->vddci_dependency_on_mclk.entries);
-	kfree(dyn_state->vddc_dependency_on_mclk.entries);
-	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
-	kfree(dyn_state->cac_leakage_table.entries);
-	kfree(dyn_state->phase_shedding_limits_table.entries);
-	kfree(dyn_state->ppm_table);
-	kfree(dyn_state->cac_tdp_table);
-	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
-	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
-	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
-	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
-	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
-}
-
-static const char *pp_lib_thermal_controller_names[] = {
-	"NONE",
-	"lm63",
-	"adm1032",
-	"adm1030",
-	"max6649",
-	"lm64",
-	"f75375",
-	"RV6xx",
-	"RV770",
-	"adt7473",
-	"NONE",
-	"External GPIO",
-	"Evergreen",
-	"emc2103",
-	"Sumo",
-	"Northern Islands",
-	"Southern Islands",
-	"lm96163",
-	"Sea Islands",
-	"Kaveri/Kabini",
-};
-
-void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
-{
-	struct amdgpu_mode_info *mode_info = &adev->mode_info;
-	ATOM_PPLIB_POWERPLAYTABLE *power_table;
-	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
-	ATOM_PPLIB_THERMALCONTROLLER *controller;
-	struct amdgpu_i2c_bus_rec i2c_bus;
-	u16 data_offset;
-	u8 frev, crev;
-
-	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
-				   &frev, &crev, &data_offset))
-		return;
-	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
-		(mode_info->atom_context->bios + data_offset);
-	controller = &power_table->sThermalController;
-
-	/* add the i2c bus for thermal/fan chip */
-	if (controller->ucType > 0) {
-		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
-			adev->pm.no_fan = true;
-		adev->pm.fan_pulses_per_revolution =
-			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
-		if (adev->pm.fan_pulses_per_revolution) {
-			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
-			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
-		}
-		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
-			DRM_INFO("Internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
-		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
-			DRM_INFO("External GPIO thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
-		} else if (controller->ucType ==
-			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
-			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
-		} else if (controller->ucType ==
-			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
-			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
-		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
-			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
-				 pp_lib_thermal_controller_names[controller->ucType],
-				 controller->ucI2cAddress >> 1,
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
-			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
-			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
-			if (adev->pm.i2c_bus) {
-				struct i2c_board_info info = { };
-				const char *name = pp_lib_thermal_controller_names[controller->ucType];
-				info.addr = controller->ucI2cAddress >> 1;
-				strlcpy(info.type, name, sizeof(info.type));
-				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
-			}
-		} else {
-			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
-				 controller->ucType,
-				 controller->ucI2cAddress >> 1,
-				 (controller->ucFanParameters &
-				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-		}
-	}
-}
-
-struct amd_vce_state*
-amdgpu_get_vce_clock_state(void *handle, u32 idx)
-{
-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
-	if (idx < adev->pm.dpm.num_of_vce_states)
-		return &adev->pm.dpm.vce_states[idx];
-
-	return NULL;
-}
-
 int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
@@ -1243,211 +465,6 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
 	amdgpu_pm_compute_clocks(adev);
 }
 
-static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
-						     enum amd_pm_state_type dpm_state)
-{
-	int i;
-	struct amdgpu_ps *ps;
-	u32 ui_class;
-	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
-		true : false;
-
-	/* check if the vblank period is too short to adjust the mclk */
-	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
-		if (amdgpu_dpm_vblank_too_short(adev))
-			single_display = false;
-	}
-
-	/* certain older asics have a separare 3D performance state,
-	 * so try that first if the user selected performance
-	 */
-	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
-		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
-	/* balanced states don't exist at the moment */
-	if (dpm_state == POWER_STATE_TYPE_BALANCED)
-		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
-
-restart_search:
-	/* Pick the best power state based on current conditions */
-	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
-		ps = &adev->pm.dpm.ps[i];
-		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
-		switch (dpm_state) {
-		/* user states */
-		case POWER_STATE_TYPE_BATTERY:
-			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
-				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
-					if (single_display)
-						return ps;
-				} else
-					return ps;
-			}
-			break;
-		case POWER_STATE_TYPE_BALANCED:
-			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
-				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
-					if (single_display)
-						return ps;
-				} else
-					return ps;
-			}
-			break;
-		case POWER_STATE_TYPE_PERFORMANCE:
-			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
-				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
-					if (single_display)
-						return ps;
-				} else
-					return ps;
-			}
-			break;
-		/* internal states */
-		case POWER_STATE_TYPE_INTERNAL_UVD:
-			if (adev->pm.dpm.uvd_ps)
-				return adev->pm.dpm.uvd_ps;
-			else
-				break;
-		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
-			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_BOOT:
-			return adev->pm.dpm.boot_ps;
-		case POWER_STATE_TYPE_INTERNAL_THERMAL:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_ACPI:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_ULV:
-			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
-				return ps;
-			break;
-		case POWER_STATE_TYPE_INTERNAL_3DPERF:
-			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
-				return ps;
-			break;
-		default:
-			break;
-		}
-	}
-	/* use a fallback state if we didn't match */
-	switch (dpm_state) {
-	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
-		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
-		goto restart_search;
-	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
-	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
-	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
-		if (adev->pm.dpm.uvd_ps) {
-			return adev->pm.dpm.uvd_ps;
-		} else {
-			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
-			goto restart_search;
-		}
-	case POWER_STATE_TYPE_INTERNAL_THERMAL:
-		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
-		goto restart_search;
-	case POWER_STATE_TYPE_INTERNAL_ACPI:
-		dpm_state = POWER_STATE_TYPE_BATTERY;
-		goto restart_search;
-	case POWER_STATE_TYPE_BATTERY:
-	case POWER_STATE_TYPE_BALANCED:
-	case POWER_STATE_TYPE_INTERNAL_3DPERF:
-		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
-		goto restart_search;
-	default:
-		break;
-	}
-
-	return NULL;
-}
-
-static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
-{
-	struct amdgpu_ps *ps;
-	enum amd_pm_state_type dpm_state;
-	int ret;
-	bool equal = false;
-
-	/* if dpm init failed */
-	if (!adev->pm.dpm_enabled)
-		return;
-
-	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
-		/* add other state override checks here */
-		if ((!adev->pm.dpm.thermal_active) &&
-		    (!adev->pm.dpm.uvd_active))
-			adev->pm.dpm.state = adev->pm.dpm.user_state;
-	}
-	dpm_state = adev->pm.dpm.state;
-
-	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
-	if (ps)
-		adev->pm.dpm.requested_ps = ps;
-	else
-		return;
-
-	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
-		printk("switching from power state:\n");
-		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
-		printk("switching to power state:\n");
-		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
-	}
-
-	/* update whether vce is active */
-	ps->vce_active = adev->pm.dpm.vce_active;
-	if (adev->powerplay.pp_funcs->display_configuration_changed)
-		amdgpu_dpm_display_configuration_changed(adev);
-
-	ret = amdgpu_dpm_pre_set_power_state(adev);
-	if (ret)
-		return;
-
-	if (adev->powerplay.pp_funcs->check_state_equal) {
-		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
-			equal = false;
-	}
-
-	if (equal)
-		return;
-
-	if (adev->powerplay.pp_funcs->set_power_state)
-		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
-
-	amdgpu_dpm_post_set_power_state(adev);
-
-	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
-	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
-
-	if (adev->powerplay.pp_funcs->force_performance_level) {
-		if (adev->pm.dpm.thermal_active) {
-			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
-			/* force low perf level for thermal */
-			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
-			/* save the user's level */
-			adev->pm.dpm.forced_level = level;
-		} else {
-			/* otherwise, user selected level */
-			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
-		}
-	}
-}
-
 void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 {
 	int i = 0;
@@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 			amdgpu_fence_wait_empty(ring);
 	}
 
-	if (adev->powerplay.pp_funcs->dispatch_tasks) {
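+	/*
+	 * Legacy asics (SI/KV) select the power state directly via the
+	 * new change_power_state callback; everything else goes through
+	 * the AMD_PP_TASK_DISPLAY_CONFIG_CHANGE dispatch below.
+	 */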
+	if ((adev->family == AMDGPU_FAMILY_SI) ||
+	     (adev->family == AMDGPU_FAMILY_KV)) {
+		amdgpu_dpm_get_active_displays(adev);
+		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
+	} else {
 		if (!amdgpu_device_has_dc_support(adev)) {
-			mutex_lock(&adev->pm.mutex);
 			amdgpu_dpm_get_active_displays(adev);
 			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
 			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
@@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 				adev->powerplay.pp_funcs->display_configuration_change(
 							adev->powerplay.pp_handle,
 							&adev->pm.pm_display_cfg);
-			mutex_unlock(&adev->pm.mutex);
 		}
 		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
-	} else {
-		mutex_lock(&adev->pm.mutex);
-		amdgpu_dpm_get_active_displays(adev);
-		amdgpu_dpm_change_power_state_locked(adev);
-		mutex_unlock(&adev->pm.mutex);
 	}
 }
 
@@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
 	}
 }
 
-void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
-{
-	int i;
-
-	if (adev->powerplay.pp_funcs->print_power_state == NULL)
-		return;
-
-	for (i = 0; i < adev->pm.dpm.num_ps; i++)
-		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
-
-}
-
 void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
 {
 	int ret = 0;
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 01120b302590..295d2902aef7 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -366,24 +366,10 @@ enum amdgpu_display_gap
     AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
 };
 
-void amdgpu_dpm_print_class_info(u32 class, u32 class2);
-void amdgpu_dpm_print_cap_info(u32 caps);
-void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
-				struct amdgpu_ps *rps);
 u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
 int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
 			   void *data, uint32_t *size);
 
-int amdgpu_get_platform_caps(struct amdgpu_device *adev);
-
-int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
-void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
-
-void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
-
-struct amd_vce_state*
-amdgpu_get_vce_clock_state(void *handle, u32 idx);
-
 int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
 				      uint32_t block_type, bool gate);
 
@@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
-void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
 int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
 int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
 int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
index 0fb114adc79f..614d8b6a58ad 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
+++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
@@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
 
 include $(AMD_POWERPLAY)
 
-POWER_MGR-y = amd_powerplay.o
+POWER_MGR-y = amd_powerplay.o legacy_dpm.o
 
 POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
 
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
index 380a5336c74f..90f4c65659e2 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
@@ -36,6 +36,7 @@
 
 #include "gca/gfx_7_2_d.h"
 #include "gca/gfx_7_2_sh_mask.h"
+#include "legacy_dpm.h"
 
 #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
 #define KV_MINIMUM_ENGINE_CLOCK         800
@@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
 	.get_vce_clock_state = amdgpu_get_vce_clock_state,
 	.check_state_equal = kv_check_state_equal,
 	.read_sensor = &kv_dpm_read_sensor,
+	.change_power_state = amdgpu_dpm_change_power_state_locked,
 };
 
 static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
new file mode 100644
index 000000000000..9427c1026e1d
--- /dev/null
+++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
@@ -0,0 +1,1453 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "amdgpu.h"
+#include "amdgpu_atombios.h"
+#include "amdgpu_i2c.h"
+#include "atom.h"
+#include "amd_pcie.h"
+#include "legacy_dpm.h"
+
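+/*
+ * Convenience wrappers around the powerplay pp_funcs callbacks used
+ * by the legacy (SI/KV) power state handling below. Callers check
+ * that the optional callbacks are implemented before invoking them.
+ */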
+#define amdgpu_dpm_pre_set_power_state(adev) \
+		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_post_set_power_state(adev) \
+		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_display_configuration_changed(adev) \
+		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_print_power_state(adev, ps) \
+		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
+
+#define amdgpu_dpm_vblank_too_short(adev) \
+		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
+
+#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
+		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
+
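+/*
+ * Compute the memory PLL dividers for the requested clock (in 10 kHz
+ * units) through the ComputeMemoryClockParam command table. Only the
+ * v2.1 table used by SI is handled here.
+ */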
+int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
+					    u32 clock,
+					    bool strobe_mode,
+					    struct atom_mpll_param *mpll_param)
+{
+	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
+	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
+	u8 frev, crev;
+
+	memset(&args, 0, sizeof(args));
+	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
+
+	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
+		return -EINVAL;
+
+	switch (frev) {
+	case 2:
+		switch (crev) {
+		case 1:
+			/* SI */
+			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
+			args.ucInputFlag = 0;
+			if (strobe_mode)
+				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
+
+			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
+
+			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
+			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
+			mpll_param->post_div = args.ucPostDiv;
+			mpll_param->dll_speed = args.ucDllSpeed;
+			mpll_param->bwcntl = args.ucBWCntl;
+			mpll_param->vco_mode =
+				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
+			mpll_param->yclk_sel =
+				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
+			mpll_param->qdr =
+				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
+			mpll_param->half_rate =
+				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
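+/*
+ * Program the DRAM timings for the given engine/memory clock pair
+ * through the DynamicMemorySettings command table.
+ */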
+void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
+					     u32 eng_clock, u32 mem_clock)
+{
+	SET_ENGINE_CLOCK_PS_ALLOCATION args;
+	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
+	u32 tmp;
+
+	memset(&args, 0, sizeof(args));
+
+	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
+	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
+
+	args.ulTargetEngineClock = cpu_to_le32(tmp);
+	if (mem_clock)
+		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
+
+	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
+}
+
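+/*
+ * The boot-up VDDC/VDDCI/MVDD voltages are read from the FirmwareInfo
+ * data table. VDDCI and MVDD only exist in v2.2+ tables and are
+ * reported as 0 otherwise.
+ */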
+union firmware_info {
+	ATOM_FIRMWARE_INFO info;
+	ATOM_FIRMWARE_INFO_V1_2 info_12;
+	ATOM_FIRMWARE_INFO_V1_3 info_13;
+	ATOM_FIRMWARE_INFO_V1_4 info_14;
+	ATOM_FIRMWARE_INFO_V2_1 info_21;
+	ATOM_FIRMWARE_INFO_V2_2 info_22;
+};
+
+void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
+					  u16 *vddc, u16 *vddci, u16 *mvdd)
+{
+	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
+	u8 frev, crev;
+	u16 data_offset;
+	union firmware_info *firmware_info;
+
+	*vddc = 0;
+	*vddci = 0;
+	*mvdd = 0;
+
+	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
+				   &frev, &crev, &data_offset)) {
+		firmware_info =
+			(union firmware_info *)(mode_info->atom_context->bios +
+						data_offset);
+		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
+		if ((frev == 2) && (crev >= 2)) {
+			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
+			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
+		}
+	}
+}
+
+union set_voltage {
+	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
+	struct _SET_VOLTAGE_PARAMETERS v1;
+	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
+	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
+};
+
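+/*
+ * Execute the SetVoltage command table in "get" mode: v2 returns the
+ * maximum VDDC level, v3 looks up the level for the given voltage
+ * type/id (used below for leakage index translation).
+ */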
+int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+			     u16 voltage_id, u16 *voltage)
+{
+	union set_voltage args;
+	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
+	u8 frev, crev;
+
+	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
+		return -EINVAL;
+
+	switch (crev) {
+	case 1:
+		return -EINVAL;
+	case 2:
+		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
+		args.v2.ucVoltageMode = 0;
+		args.v2.usVoltageLevel = 0;
+
+		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
+
+		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
+		break;
+	case 3:
+		args.v3.ucVoltageType = voltage_type;
+		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
+		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
+
+		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
+
+		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
+		break;
+	default:
+		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
+						      u16 *voltage,
+						      u16 leakage_idx)
+{
+	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
+}
+
+union voltage_object_info {
+	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
+	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
+	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
+};
+
+union voltage_object {
+	struct _ATOM_VOLTAGE_OBJECT v1;
+	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
+	union _ATOM_VOLTAGE_OBJECT_V3 v3;
+};
+
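+/*
+ * Walk the variable-sized voltage object list of a v3.1
+ * VoltageObjectInfo table and return the object matching the
+ * requested voltage type and mode, or NULL if none is found.
+ */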
+static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
+									u8 voltage_type, u8 voltage_mode)
+{
+	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
+	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
+	u8 *start = (u8 *)v3;
+
+	while (offset < size) {
+		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
+		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
+		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
+			return vo;
+		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
+	}
+	return NULL;
+}
+
+int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
+			      u8 voltage_type,
+			      u8 *svd_gpio_id, u8 *svc_gpio_id)
+{
+	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
+	u8 frev, crev;
+	u16 data_offset, size;
+	union voltage_object_info *voltage_info;
+	union voltage_object *voltage_object = NULL;
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
+				   &frev, &crev, &data_offset)) {
+		voltage_info = (union voltage_object_info *)
+			(adev->mode_info.atom_context->bios + data_offset);
+
+		switch (frev) {
+		case 3:
+			switch (crev) {
+			case 1:
+				voltage_object = (union voltage_object *)
+					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
+								      voltage_type,
+								      VOLTAGE_OBJ_SVID2);
+				if (voltage_object) {
+					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
+					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
+				} else {
+					return -EINVAL;
+				}
+				break;
+			default:
+				DRM_ERROR("unknown voltage object table\n");
+				return -EINVAL;
+			}
+			break;
+		default:
+			DRM_ERROR("unknown voltage object table\n");
+			return -EINVAL;
+		}
+
+	return 0;
+}
+
+bool
+amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
+				u8 voltage_type, u8 voltage_mode)
+{
+	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
+	u8 frev, crev;
+	u16 data_offset, size;
+	union voltage_object_info *voltage_info;
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
+				   &frev, &crev, &data_offset)) {
+		voltage_info = (union voltage_object_info *)
+			(adev->mode_info.atom_context->bios + data_offset);
+
+		switch (frev) {
+		case 3:
+			switch (crev) {
+			case 1:
+				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
+								  voltage_type, voltage_mode))
+					return true;
+				break;
+			default:
+				DRM_ERROR("unknown voltage object table\n");
+				return false;
+			}
+			break;
+		default:
+			DRM_ERROR("unknown voltage object table\n");
+			return false;
+		}
+	}
+	return false;
+}
+
+int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
+				      u8 voltage_type, u8 voltage_mode,
+				      struct atom_voltage_table *voltage_table)
+{
+	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
+	u8 frev, crev;
+	u16 data_offset, size;
+	int i;
+	union voltage_object_info *voltage_info;
+	union voltage_object *voltage_object = NULL;
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
+				   &frev, &crev, &data_offset)) {
+		voltage_info = (union voltage_object_info *)
+			(adev->mode_info.atom_context->bios + data_offset);
+
+		switch (frev) {
+		case 3:
+			switch (crev) {
+			case 1:
+				voltage_object = (union voltage_object *)
+					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
+								      voltage_type, voltage_mode);
+				if (voltage_object) {
+					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
+						&voltage_object->v3.asGpioVoltageObj;
+					VOLTAGE_LUT_ENTRY_V2 *lut;
+					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
+						return -EINVAL;
+					lut = &gpio->asVolGpioLut[0];
+					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
+						voltage_table->entries[i].value =
+							le16_to_cpu(lut->usVoltageValue);
+						voltage_table->entries[i].smio_low =
+							le32_to_cpu(lut->ulVoltageId);
+						lut = (VOLTAGE_LUT_ENTRY_V2 *)
+							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
+					}
+					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
+					voltage_table->count = gpio->ucGpioEntryNum;
+					voltage_table->phase_delay = gpio->ucPhaseDelay;
+					return 0;
+				}
+				break;
+			default:
+				DRM_ERROR("unknown voltage object table\n");
+				return -EINVAL;
+			}
+			break;
+		default:
+			DRM_ERROR("unknown voltage object table\n");
+			return -EINVAL;
+		}
+	}
+	return -EINVAL;
+}
+
+union vram_info {
+	struct _ATOM_VRAM_INFO_V3 v1_3;
+	struct _ATOM_VRAM_INFO_V4 v1_4;
+	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
+};
+
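+/*
+ * Each memory-clock range data block starts with a u32 that packs the
+ * memory module ID in bits 31:24 and the maximum mclk for the range
+ * in bits 23:0.
+ */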
+#define MEM_ID_MASK           0xff000000
+#define MEM_ID_SHIFT          24
+#define CLOCK_RANGE_MASK      0x00ffffff
+#define CLOCK_RANGE_SHIFT     0
+#define LOW_NIBBLE_MASK       0xf
+#define DATA_EQU_PREV         0
+#define DATA_FROM_TABLE       4
+
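+/*
+ * Build the MC register table from the VRAM_Info v2.1 data: the
+ * register index table is copied first, then one data block per
+ * memory clock range is parsed until END_OF_REG_DATA_BLOCK is seen.
+ */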
+int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
+				      u8 module_index,
+				      struct atom_mc_reg_table *reg_table)
+{
+	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
+	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
+	u32 i = 0, j;
+	u16 data_offset, size;
+	union vram_info *vram_info;
+
+	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
+				   &frev, &crev, &data_offset)) {
+		vram_info = (union vram_info *)
+			(adev->mode_info.atom_context->bios + data_offset);
+		switch (frev) {
+		case 1:
+			DRM_ERROR("old table version %d, %d\n", frev, crev);
+			return -EINVAL;
+		case 2:
+			switch (crev) {
+			case 1:
+				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
+					ATOM_INIT_REG_BLOCK *reg_block =
+						(ATOM_INIT_REG_BLOCK *)
+						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
+					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
+						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
+						((u8 *)reg_block + (2 * sizeof(u16)) +
+						 le16_to_cpu(reg_block->usRegIndexTblSize));
+					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
+					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
+							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
+					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
+						return -EINVAL;
+					while (i < num_entries) {
+						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
+							break;
+						reg_table->mc_reg_address[i].s1 =
+							(u16)(le16_to_cpu(format->usRegIndex));
+						reg_table->mc_reg_address[i].pre_reg_data =
+							(u8)(format->ucPreRegDataLength);
+						i++;
+						format = (ATOM_INIT_REG_INDEX_FORMAT *)
+							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
+					}
+					reg_table->last = i;
+					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
+					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
+						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
+								>> MEM_ID_SHIFT);
+						if (module_index == t_mem_id) {
+							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
+								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
+								      >> CLOCK_RANGE_SHIFT);
+							for (i = 0, j = 1; i < reg_table->last; i++) {
+								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
+									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
+										(u32)le32_to_cpu(*((u32 *)reg_data + j));
+									j++;
+								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
+									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
+										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
+								}
+							}
+							num_ranges++;
+						}
+						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
+							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
+					}
+					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
+						return -EINVAL;
+					reg_table->num_entries = num_ranges;
+				} else
+					return -EINVAL;
+				break;
+			default:
+				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+				return -EINVAL;
+			}
+			break;
+		default:
+			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+			return -EINVAL;
+		}
+		return 0;
+	}
+	return -EINVAL;
+}
+
+void amdgpu_dpm_print_class_info(u32 class, u32 class2)
+{
+	const char *s;
+
+	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
+	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
+	default:
+		s = "none";
+		break;
+	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
+		s = "battery";
+		break;
+	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
+		s = "balanced";
+		break;
+	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
+		s = "performance";
+		break;
+	}
+	printk("\tui class: %s\n", s);
+	printk("\tinternal class:");
+	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
+	    (class2 == 0))
+		pr_cont(" none");
+	else {
+		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
+			pr_cont(" boot");
+		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
+			pr_cont(" thermal");
+		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
+			pr_cont(" limited_pwr");
+		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
+			pr_cont(" rest");
+		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
+			pr_cont(" forced");
+		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
+			pr_cont(" 3d_perf");
+		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
+			pr_cont(" ovrdrv");
+		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
+			pr_cont(" uvd");
+		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
+			pr_cont(" 3d_low");
+		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
+			pr_cont(" acpi");
+		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
+			pr_cont(" uvd_hd2");
+		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
+			pr_cont(" uvd_hd");
+		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
+			pr_cont(" uvd_sd");
+		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
+			pr_cont(" limited_pwr2");
+		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
+			pr_cont(" ulv");
+		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
+			pr_cont(" uvd_mvc");
+	}
+	pr_cont("\n");
+}
+
+void amdgpu_dpm_print_cap_info(u32 caps)
+{
+	printk("\tcaps:");
+	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
+		pr_cont(" single_disp");
+	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
+		pr_cont(" video");
+	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
+		pr_cont(" no_dc");
+	pr_cont("\n");
+}
+
+void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
+				struct amdgpu_ps *rps)
+{
+	printk("\tstatus:");
+	if (rps == adev->pm.dpm.current_ps)
+		pr_cont(" c");
+	if (rps == adev->pm.dpm.requested_ps)
+		pr_cont(" r");
+	if (rps == adev->pm.dpm.boot_ps)
+		pr_cont(" b");
+	pr_cont("\n");
+}
+
+void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
+{
+	int i;
+
+	if (adev->powerplay.pp_funcs->print_power_state == NULL)
+		return;
+
+	for (i = 0; i < adev->pm.dpm.num_ps; i++)
+		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
+}
+
+union power_info {
+	struct _ATOM_POWERPLAY_INFO info;
+	struct _ATOM_POWERPLAY_INFO_V2 info_2;
+	struct _ATOM_POWERPLAY_INFO_V3 info_3;
+	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
+	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
+	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
+	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
+	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
+};
+
+int amdgpu_get_platform_caps(struct amdgpu_device *adev)
+{
+	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+	union power_info *power_info;
+	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
+	u16 data_offset;
+	u8 frev, crev;
+
+	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
+				   &frev, &crev, &data_offset))
+		return -EINVAL;
+	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
+
+	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
+	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
+	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
+
+	return 0;
+}
+
+union fan_info {
+	struct _ATOM_PPLIB_FANTABLE fan;
+	struct _ATOM_PPLIB_FANTABLE2 fan2;
+	struct _ATOM_PPLIB_FANTABLE3 fan3;
+};
+
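+/*
+ * Convert a packed ATOM clock/voltage dependency table (16-bit clock
+ * low word plus 8-bit high byte per record) into the driver's table.
+ * The allocation is released by amdgpu_free_extended_power_table().
+ */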
+static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
+					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
+{
+	u32 size = atom_table->ucNumEntries *
+		sizeof(struct amdgpu_clock_voltage_dependency_entry);
+	int i;
+	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
+
+	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
+	if (!amdgpu_table->entries)
+		return -ENOMEM;
+
+	entry = &atom_table->entries[0];
+	for (i = 0; i < atom_table->ucNumEntries; i++) {
+		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
+			(entry->ucClockHigh << 16);
+		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
+		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
+			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
+	}
+	amdgpu_table->count = atom_table->ucNumEntries;
+
+	return 0;
+}
+
+/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
+#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
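+/*
+ * Each extended-header revision appends one 16-bit table offset,
+ * hence the 2-byte step between the sizes above.
+ */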
+
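+/*
+ * Parse the optional PPLib tables: fan control, clock/voltage
+ * dependency and limit tables, CAC leakage data and the
+ * extended-header tables (VCE/UVD/SAMU/PPM/ACP/PowerTune/VddGfx).
+ * All allocations made here are undone by
+ * amdgpu_free_extended_power_table().
+ */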
+int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
+{
+	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+	union power_info *power_info;
+	union fan_info *fan_info;
+	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
+	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
+	u16 data_offset;
+	u8 frev, crev;
+	int ret, i;
+
+	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
+				   &frev, &crev, &data_offset))
+		return -EINVAL;
+	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
+
+	/* fan table */
+	if (le16_to_cpu(power_info->pplib.usTableSize) >=
+	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
+		if (power_info->pplib3.usFanTableOffset) {
+			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
+						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
+			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
+			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
+			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
+			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
+			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
+			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
+			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
+			if (fan_info->fan.ucFanTableFormat >= 2)
+				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
+			else
+				adev->pm.dpm.fan.t_max = 10900;
+			adev->pm.dpm.fan.cycle_delay = 100000;
+			if (fan_info->fan.ucFanTableFormat >= 3) {
+				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
+				adev->pm.dpm.fan.default_max_fan_pwm =
+					le16_to_cpu(fan_info->fan3.usFanPWMMax);
+				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
+				adev->pm.dpm.fan.fan_output_sensitivity =
+					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
+			}
+			adev->pm.dpm.fan.ucode_fan_control = true;
+		}
+	}
+
+	/* clock dependency tables, shedding tables */
+	if (le16_to_cpu(power_info->pplib.usTableSize) >=
+	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
+		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
+			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
+			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+								 dep_table);
+			if (ret) {
+				amdgpu_free_extended_power_table(adev);
+				return ret;
+			}
+		}
+		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
+			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
+			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+								 dep_table);
+			if (ret) {
+				amdgpu_free_extended_power_table(adev);
+				return ret;
+			}
+		}
+		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
+			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
+			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+								 dep_table);
+			if (ret) {
+				amdgpu_free_extended_power_table(adev);
+				return ret;
+			}
+		}
+		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
+			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
+			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
+								 dep_table);
+			if (ret) {
+				amdgpu_free_extended_power_table(adev);
+				return ret;
+			}
+		}
+		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
+			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
+				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
+			if (clk_v->ucNumEntries) {
+				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
+					le16_to_cpu(clk_v->entries[0].usSclkLow) |
+					(clk_v->entries[0].ucSclkHigh << 16);
+				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
+					le16_to_cpu(clk_v->entries[0].usMclkLow) |
+					(clk_v->entries[0].ucMclkHigh << 16);
+				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
+					le16_to_cpu(clk_v->entries[0].usVddc);
+				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
+					le16_to_cpu(clk_v->entries[0].usVddci);
+			}
+		}
+		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
+			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
+				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
+			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
+
+			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
+				kcalloc(psl->ucNumEntries,
+					sizeof(struct amdgpu_phase_shedding_limits_entry),
+					GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+
+			entry = &psl->entries[0];
+			for (i = 0; i < psl->ucNumEntries; i++) {
+				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
+					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
+				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
+					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
+				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
+					le16_to_cpu(entry->usVoltage);
+				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
+			}
+			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
+				psl->ucNumEntries;
+		}
+	}
+
+	/* cac data */
+	if (le16_to_cpu(power_info->pplib.usTableSize) >=
+	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
+		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
+		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
+		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
+		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
+		if (adev->pm.dpm.tdp_od_limit)
+			adev->pm.dpm.power_control = true;
+		else
+			adev->pm.dpm.power_control = false;
+		adev->pm.dpm.tdp_adjustment = 0;
+		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
+		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
+		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
+		if (power_info->pplib5.usCACLeakageTableOffset) {
+			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
+				(ATOM_PPLIB_CAC_Leakage_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
+			ATOM_PPLIB_CAC_Leakage_Record *entry;
+			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
+			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			entry = &cac_table->entries[0];
+			for (i = 0; i < cac_table->ucNumEntries; i++) {
+				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
+					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
+						le16_to_cpu(entry->usVddc1);
+					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
+						le16_to_cpu(entry->usVddc2);
+					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
+						le16_to_cpu(entry->usVddc3);
+				} else {
+					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
+						le16_to_cpu(entry->usVddc);
+					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
+						le32_to_cpu(entry->ulLeakageValue);
+				}
+				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
+			}
+			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
+		}
+	}
+
+	/* ext tables */
+	if (le16_to_cpu(power_info->pplib.usTableSize) >=
+	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
+		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
+			(mode_info->atom_context->bios + data_offset +
+			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
+			ext_hdr->usVCETableOffset) {
+			VCEClockInfoArray *array = (VCEClockInfoArray *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
+			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
+				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
+				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
+			ATOM_PPLIB_VCE_State_Table *states =
+				(ATOM_PPLIB_VCE_State_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
+				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
+				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
+			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
+			ATOM_PPLIB_VCE_State_Record *state_entry;
+			VCEClockInfo *vce_clk;
+			u32 size = limits->numEntries *
+				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
+			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
+				kzalloc(size, GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
+				limits->numEntries;
+			entry = &limits->entries[0];
+			state_entry = &states->entries[0];
+			for (i = 0; i < limits->numEntries; i++) {
+				vce_clk = (VCEClockInfo *)
+					((u8 *)&array->entries[0] +
+					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
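+				/* clocks are stored as a 16-bit low word plus
+				 * an 8-bit high byte, i.e. a 24-bit value
+				 */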
+				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
+					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
+				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
+					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
+				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
+					le16_to_cpu(entry->usVoltage);
+				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
+			}
+			adev->pm.dpm.num_of_vce_states =
+					states->numEntries > AMD_MAX_VCE_LEVELS ?
+					AMD_MAX_VCE_LEVELS : states->numEntries;
+			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
+				vce_clk = (VCEClockInfo *)
+					((u8 *)&array->entries[0] +
+					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
+				adev->pm.dpm.vce_states[i].evclk =
+					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
+				adev->pm.dpm.vce_states[i].ecclk =
+					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
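+				/* ucClockInfoIndex: bits 5:0 hold the state
+				 * index, bits 7:6 the pstate
+				 */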
+				adev->pm.dpm.vce_states[i].clk_idx =
+					state_entry->ucClockInfoIndex & 0x3f;
+				adev->pm.dpm.vce_states[i].pstate =
+					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
+				state_entry = (ATOM_PPLIB_VCE_State_Record *)
+					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
+			}
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
+			ext_hdr->usUVDTableOffset) {
+			UVDClockInfoArray *array = (UVDClockInfoArray *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
+			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
+				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
+				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
+			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
+			u32 size = limits->numEntries *
+				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
+			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
+				kzalloc(size, GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
+				limits->numEntries;
+			entry = &limits->entries[0];
+			for (i = 0; i < limits->numEntries; i++) {
+				UVDClockInfo *uvd_clk = (UVDClockInfo *)
+					((u8 *)&array->entries[0] +
+					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
+				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
+					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
+				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
+					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
+				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
+					le16_to_cpu(entry->usVoltage);
+				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
+			}
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
+			ext_hdr->usSAMUTableOffset) {
+			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
+				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
+			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
+			u32 size = limits->numEntries *
+				sizeof(struct amdgpu_clock_voltage_dependency_entry);
+			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
+				kzalloc(size, GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
+				limits->numEntries;
+			entry = &limits->entries[0];
+			for (i = 0; i < limits->numEntries; i++) {
+				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
+					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
+				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
+					le16_to_cpu(entry->usVoltage);
+				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
+			}
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
+		    ext_hdr->usPPMTableOffset) {
+			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usPPMTableOffset));
+			adev->pm.dpm.dyn_state.ppm_table =
+				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.ppm_table) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
+			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
+				le16_to_cpu(ppm->usCpuCoreNumber);
+			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
+				le32_to_cpu(ppm->ulPlatformTDP);
+			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
+				le32_to_cpu(ppm->ulSmallACPlatformTDP);
+			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
+				le32_to_cpu(ppm->ulPlatformTDC);
+			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
+				le32_to_cpu(ppm->ulSmallACPlatformTDC);
+			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
+				le32_to_cpu(ppm->ulApuTDP);
+			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
+				le32_to_cpu(ppm->ulDGpuTDP);
+			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
+				le32_to_cpu(ppm->ulDGpuUlvPower);
+			adev->pm.dpm.dyn_state.ppm_table->tj_max =
+				le32_to_cpu(ppm->ulTjmax);
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
+			ext_hdr->usACPTableOffset) {
+			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
+				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
+			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
+			u32 size = limits->numEntries *
+				sizeof(struct amdgpu_clock_voltage_dependency_entry);
+			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
+				kzalloc(size, GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
+				limits->numEntries;
+			entry = &limits->entries[0];
+			for (i = 0; i < limits->numEntries; i++) {
+				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
+					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
+				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
+					le16_to_cpu(entry->usVoltage);
+				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
+					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
+			}
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
+			ext_hdr->usPowerTuneTableOffset) {
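+			/* the first byte of the PowerTune table is its
+			 * revision; rev > 0 means the V1 layout, which adds
+			 * usMaximumPowerDeliveryLimit
+			 */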
+			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
+					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
+			ATOM_PowerTune_Table *pt;
+			adev->pm.dpm.dyn_state.cac_tdp_table =
+				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
+			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
+				amdgpu_free_extended_power_table(adev);
+				return -ENOMEM;
+			}
+			if (rev > 0) {
+				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
+					(mode_info->atom_context->bios + data_offset +
+					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
+				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
+					ppt->usMaximumPowerDeliveryLimit;
+				pt = &ppt->power_tune_table;
+			} else {
+				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
+					(mode_info->atom_context->bios + data_offset +
+					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
+				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
+				pt = &ppt->power_tune_table;
+			}
+			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
+			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
+				le16_to_cpu(pt->usConfigurableTDP);
+			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
+			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
+				le16_to_cpu(pt->usBatteryPowerLimit);
+			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
+				le16_to_cpu(pt->usSmallPowerLimit);
+			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
+				le16_to_cpu(pt->usLowCACLeakage);
+			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
+				le16_to_cpu(pt->usHighCACLeakage);
+		}
+		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
+				ext_hdr->usSclkVddgfxTableOffset) {
+			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
+				(mode_info->atom_context->bios + data_offset +
+				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
+			ret = amdgpu_parse_clk_voltage_dep_table(
+					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
+					dep_table);
+			if (ret) {
+				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
+				return ret;
+			}
+		}
+	}
+
+	return 0;
+}
+
+void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
+{
+	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
+
+	kfree(dyn_state->vddc_dependency_on_sclk.entries);
+	kfree(dyn_state->vddci_dependency_on_mclk.entries);
+	kfree(dyn_state->vddc_dependency_on_mclk.entries);
+	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
+	kfree(dyn_state->cac_leakage_table.entries);
+	kfree(dyn_state->phase_shedding_limits_table.entries);
+	kfree(dyn_state->ppm_table);
+	kfree(dyn_state->cac_tdp_table);
+	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
+	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
+	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
+	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
+	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
+}
+
+static const char *pp_lib_thermal_controller_names[] = {
+	"NONE",
+	"lm63",
+	"adm1032",
+	"adm1030",
+	"max6649",
+	"lm64",
+	"f75375",
+	"RV6xx",
+	"RV770",
+	"adt7473",
+	"NONE",
+	"External GPIO",
+	"Evergreen",
+	"emc2103",
+	"Sumo",
+	"Northern Islands",
+	"Southern Islands",
+	"lm96163",
+	"Sea Islands",
+	"Kaveri/Kabini",
+};
+
+void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
+{
+	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+	ATOM_PPLIB_POWERPLAYTABLE *power_table;
+	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
+	ATOM_PPLIB_THERMALCONTROLLER *controller;
+	struct amdgpu_i2c_bus_rec i2c_bus;
+	u16 data_offset;
+	u8 frev, crev;
+
+	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
+				   &frev, &crev, &data_offset))
+		return;
+	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
+		(mode_info->atom_context->bios + data_offset);
+	controller = &power_table->sThermalController;
+
+	/* add the i2c bus for thermal/fan chip */
+	if (controller->ucType > 0) {
+		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
+			adev->pm.no_fan = true;
+		adev->pm.fan_pulses_per_revolution =
+			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
+		if (adev->pm.fan_pulses_per_revolution) {
+			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
+			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
+		}
+		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
+			DRM_INFO("Internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
+		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
+			DRM_INFO("External GPIO thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
+		} else if (controller->ucType ==
+			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
+			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
+		} else if (controller->ucType ==
+			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
+			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
+		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
+			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
+				 pp_lib_thermal_controller_names[controller->ucType],
+				 controller->ucI2cAddress >> 1,
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
+			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
+			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
+			if (adev->pm.i2c_bus) {
+				struct i2c_board_info info = { };
+				const char *name = pp_lib_thermal_controller_names[controller->ucType];
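+				/* ATOM stores the 8-bit i2c address; the i2c
+				 * core expects the 7-bit form, hence the shift
+				 */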
+				info.addr = controller->ucI2cAddress >> 1;
+				strlcpy(info.type, name, sizeof(info.type));
+				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
+			}
+		} else {
+			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
+				 controller->ucType,
+				 controller->ucI2cAddress >> 1,
+				 (controller->ucFanParameters &
+				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+		}
+	}
+}
+
+struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (idx < adev->pm.dpm.num_of_vce_states)
+		return &adev->pm.dpm.vce_states[idx];
+
+	return NULL;
+}
+
+static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
+						     enum amd_pm_state_type dpm_state)
+{
+	int i;
+	struct amdgpu_ps *ps;
+	u32 ui_class;
+	bool single_display = adev->pm.dpm.new_active_crtc_count < 2;
+
+	/* check if the vblank period is too short to adjust the mclk */
+	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
+		if (amdgpu_dpm_vblank_too_short(adev))
+			single_display = false;
+	}
+
+	/* certain older ASICs have a separate 3D performance state,
+	 * so try that first if the user selected performance
+	 */
+	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
+		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
+	/* balanced states don't exist at the moment */
+	if (dpm_state == POWER_STATE_TYPE_BALANCED)
+		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
+
+restart_search:
+	/* Pick the best power state based on current conditions */
+	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
+		ps = &adev->pm.dpm.ps[i];
+		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
+		switch (dpm_state) {
+		/* user states */
+		case POWER_STATE_TYPE_BATTERY:
+			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
+				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
+					if (single_display)
+						return ps;
+				} else
+					return ps;
+			}
+			break;
+		case POWER_STATE_TYPE_BALANCED:
+			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
+				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
+					if (single_display)
+						return ps;
+				} else
+					return ps;
+			}
+			break;
+		case POWER_STATE_TYPE_PERFORMANCE:
+			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
+				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
+					if (single_display)
+						return ps;
+				} else
+					return ps;
+			}
+			break;
+		/* internal states */
+		case POWER_STATE_TYPE_INTERNAL_UVD:
+			if (adev->pm.dpm.uvd_ps)
+				return adev->pm.dpm.uvd_ps;
+			else
+				break;
+		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
+			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_BOOT:
+			return adev->pm.dpm.boot_ps;
+		case POWER_STATE_TYPE_INTERNAL_THERMAL:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_ACPI:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_ULV:
+			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
+				return ps;
+			break;
+		case POWER_STATE_TYPE_INTERNAL_3DPERF:
+			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
+				return ps;
+			break;
+		default:
+			break;
+		}
+	}
+	/* use a fallback state if we didn't match */
+	switch (dpm_state) {
+	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
+		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
+		goto restart_search;
+	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
+	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
+	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
+		if (adev->pm.dpm.uvd_ps) {
+			return adev->pm.dpm.uvd_ps;
+		} else {
+			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
+			goto restart_search;
+		}
+	case POWER_STATE_TYPE_INTERNAL_THERMAL:
+		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
+		goto restart_search;
+	case POWER_STATE_TYPE_INTERNAL_ACPI:
+		dpm_state = POWER_STATE_TYPE_BATTERY;
+		goto restart_search;
+	case POWER_STATE_TYPE_BATTERY:
+	case POWER_STATE_TYPE_BALANCED:
+	case POWER_STATE_TYPE_INTERNAL_3DPERF:
+		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
+		goto restart_search;
+	default:
+		break;
+	}
+
+	return NULL;
+}
+
+int amdgpu_dpm_change_power_state_locked(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ps *ps;
+	enum amd_pm_state_type dpm_state;
+	int ret;
+	bool equal = false;
+
+	/* if dpm init failed */
+	if (!adev->pm.dpm_enabled)
+		return 0;
+
+	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
+		/* add other state override checks here */
+		if ((!adev->pm.dpm.thermal_active) &&
+		    (!adev->pm.dpm.uvd_active))
+			adev->pm.dpm.state = adev->pm.dpm.user_state;
+	}
+	dpm_state = adev->pm.dpm.state;
+
+	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
+	if (ps)
+		adev->pm.dpm.requested_ps = ps;
+	else
+		return -EINVAL;
+
+	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
+		printk("switching from power state:\n");
+		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
+		printk("switching to power state:\n");
+		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
+	}
+
+	/* update whether vce is active */
+	ps->vce_active = adev->pm.dpm.vce_active;
+	if (adev->powerplay.pp_funcs->display_configuration_changed)
+		amdgpu_dpm_display_configuration_changed(adev);
+
+	ret = amdgpu_dpm_pre_set_power_state(adev);
+	if (ret)
+		return ret;
+
+	if (adev->powerplay.pp_funcs->check_state_equal) {
+		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
+			equal = false;
+	}
+
+	if (equal)
+		return 0;
+
+	if (adev->powerplay.pp_funcs->set_power_state)
+		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
+
+	amdgpu_dpm_post_set_power_state(adev);
+
+	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
+	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
+
+	if (adev->powerplay.pp_funcs->force_performance_level) {
+		if (adev->pm.dpm.thermal_active) {
+			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
+			/* force low perf level for thermal */
+			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
+			/* save the user's level */
+			adev->pm.dpm.forced_level = level;
+		} else {
+			/* otherwise, user selected level */
+			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
new file mode 100644
index 000000000000..4adc765c8824
--- /dev/null
+++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef __LEGACY_DPM_H__
+#define __LEGACY_DPM_H__
+
+int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
+					    u32 clock,
+					    bool strobe_mode,
+					    struct atom_mpll_param *mpll_param);
+
+void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
+					     u32 eng_clock, u32 mem_clock);
+
+void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
+					  u16 *vddc, u16 *vddci, u16 *mvdd);
+
+int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
+			     u16 voltage_id, u16 *voltage);
+
+int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
+						      u16 *voltage,
+						      u16 leakage_idx);
+
+int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
+			      u8 voltage_type,
+			      u8 *svd_gpio_id, u8 *svc_gpio_id);
+
+bool
+amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
+				u8 voltage_type, u8 voltage_mode);
+int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
+				      u8 voltage_type, u8 voltage_mode,
+				      struct atom_voltage_table *voltage_table);
+
+int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
+				      u8 module_index,
+				      struct atom_mc_reg_table *reg_table);
+
+void amdgpu_dpm_print_class_info(u32 class, u32 class2);
+void amdgpu_dpm_print_cap_info(u32 caps);
+void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
+				struct amdgpu_ps *rps);
+int amdgpu_get_platform_caps(struct amdgpu_device *adev);
+int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
+void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
+void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
+struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
+int amdgpu_dpm_change_power_state_locked(void *handle);
+void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
+#endif
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
index 4f84d8b893f1..a2881c90d187 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
@@ -37,6 +37,7 @@
 #include <linux/math64.h>
 #include <linux/seq_file.h>
 #include <linux/firmware.h>
+#include <legacy_dpm.h>
 
 #define MC_CG_ARB_FREQ_F0           0x0a
 #define MC_CG_ARB_FREQ_F1           0x0b
@@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
 	.check_state_equal = &si_check_state_equal,
 	.get_vce_clock_state = amdgpu_get_vce_clock_state,
 	.read_sensor = &si_dpm_read_sensor,
+	.change_power_state = amdgpu_dpm_change_power_state_locked,
 };
 
 static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 08/17] drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (6 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv) Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 09/17] drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations Evan Quan
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

As it labels an internal pm state, the amdgpu_pm structure is a more
proper place for it than the amdgpu_device structure.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I7890e8fe7af2ecd8591d30442340deb8773bacc3
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     | 1 -
 drivers/gpu/drm/amd/pm/amdgpu_pm.c      | 6 +++---
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h | 2 ++
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index c5cfe2926ca1..c987813a4996 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -950,7 +950,6 @@ struct amdgpu_device {
 
 	/* powerplay */
 	struct amd_powerplay		powerplay;
-	bool				pp_force_state_enabled;
 
 	/* smu */
 	struct smu_context		smu;
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index 3382d30b5d90..fa2f4e11e94e 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -469,7 +469,7 @@ static ssize_t amdgpu_get_pp_force_state(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if (adev->pp_force_state_enabled)
+	if (adev->pm.pp_force_state_enabled)
 		return amdgpu_get_pp_cur_state(dev, attr, buf);
 	else
 		return sysfs_emit(buf, "\n");
@@ -492,7 +492,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	adev->pp_force_state_enabled = false;
+	adev->pm.pp_force_state_enabled = false;
 
 	if (strlen(buf) == 1)
 		return count;
@@ -523,7 +523,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 		if (ret)
 			goto err_out;
 
-		adev->pp_force_state_enabled = true;
+		adev->pm.pp_force_state_enabled = true;
 	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 295d2902aef7..1462c4933ca1 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -335,6 +335,8 @@ struct amdgpu_pm {
 	struct list_head	pm_attr_list;
 
 	atomic_t		pwr_state[AMD_IP_BLOCK_TYPE_NUM];
+
+	bool			pp_force_state_enabled;
 };
 
 #define R600_SSTU_DFLT                               0
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 09/17] drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (7 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 08/17] drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 10/17] drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c Evan Quan
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Drop cross calls and multi-function APIs. Also avoid exposing
internal implementation details.
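
For reference, a caller-side sketch (illustrative only, not part of
this diff): an IP block that wants power to recompute clocks now goes
through the single wrapper, which forwards to whichever backend
registered a pm_compute_clocks callback:

	/* the caller no longer cares whether powerplay or swsmu is active */
	amdgpu_pm_compute_clocks(adev);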

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I55e5ab3da6a70482f5f5d8c256eed2f754feae20
---
 .../gpu/drm/amd/include/kgd_pp_interface.h    |   2 +-
 drivers/gpu/drm/amd/pm/Makefile               |   2 +-
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 222 +++---------------
 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  |  94 ++++++++
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   2 -
 .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |  32 +++
 .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  |  39 ++-
 drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |   6 +-
 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c |  60 ++++-
 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   3 +-
 drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |  41 +++-
 11 files changed, 295 insertions(+), 208 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
 create mode 100644 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h

diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index cdf724dcf832..7919e96e772b 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -404,7 +404,7 @@ struct amd_pm_funcs {
 	int (*get_dpm_clock_table)(void *handle,
 				   struct dpm_clocks *clock_table);
 	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
-	int (*change_power_state)(void *handle);
+	void (*pm_compute_clocks)(void *handle);
 };
 
 struct metrics_table_header {
diff --git a/drivers/gpu/drm/amd/pm/Makefile b/drivers/gpu/drm/amd/pm/Makefile
index 8cf6eff1ea93..d35ffde387f1 100644
--- a/drivers/gpu/drm/amd/pm/Makefile
+++ b/drivers/gpu/drm/amd/pm/Makefile
@@ -40,7 +40,7 @@ AMD_PM = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/,$(PM_LIBS)))
 
 include $(AMD_PM)
 
-PM_MGR = amdgpu_dpm.o amdgpu_pm.o
+PM_MGR = amdgpu_dpm.o amdgpu_pm.o amdgpu_dpm_internal.o
 
 AMD_PM_POWER = $(addprefix $(AMD_PM_PATH)/,$(PM_MGR))
 
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index c6801d10cde6..1399b4426080 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -37,73 +37,6 @@
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
 
-static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
-{
-	struct drm_device *ddev = adev_to_drm(adev);
-	struct drm_crtc *crtc;
-	struct amdgpu_crtc *amdgpu_crtc;
-
-	adev->pm.dpm.new_active_crtcs = 0;
-	adev->pm.dpm.new_active_crtc_count = 0;
-	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
-		list_for_each_entry(crtc,
-				    &ddev->mode_config.crtc_list, head) {
-			amdgpu_crtc = to_amdgpu_crtc(crtc);
-			if (amdgpu_crtc->enabled) {
-				adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
-				adev->pm.dpm.new_active_crtc_count++;
-			}
-		}
-	}
-}
-
-u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
-{
-	struct drm_device *dev = adev_to_drm(adev);
-	struct drm_crtc *crtc;
-	struct amdgpu_crtc *amdgpu_crtc;
-	u32 vblank_in_pixels;
-	u32 vblank_time_us = 0xffffffff; /* if the displays are off, vblank time is max */
-
-	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
-		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-			amdgpu_crtc = to_amdgpu_crtc(crtc);
-			if (crtc->enabled && amdgpu_crtc->enabled && amdgpu_crtc->hw_mode.clock) {
-				vblank_in_pixels =
-					amdgpu_crtc->hw_mode.crtc_htotal *
-					(amdgpu_crtc->hw_mode.crtc_vblank_end -
-					amdgpu_crtc->hw_mode.crtc_vdisplay +
-					(amdgpu_crtc->v_border * 2));
-
-				vblank_time_us = vblank_in_pixels * 1000 / amdgpu_crtc->hw_mode.clock;
-				break;
-			}
-		}
-	}
-
-	return vblank_time_us;
-}
-
-static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
-{
-	struct drm_device *dev = adev_to_drm(adev);
-	struct drm_crtc *crtc;
-	struct amdgpu_crtc *amdgpu_crtc;
-	u32 vrefresh = 0;
-
-	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
-		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-			amdgpu_crtc = to_amdgpu_crtc(crtc);
-			if (crtc->enabled && amdgpu_crtc->enabled && amdgpu_crtc->hw_mode.clock) {
-				vrefresh = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
-				break;
-			}
-		}
-	}
-
-	return vrefresh;
-}
-
 int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
@@ -432,111 +365,35 @@ int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors senso
 	return ret;
 }
 
-void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
-{
-	struct amdgpu_device *adev =
-		container_of(work, struct amdgpu_device,
-			     pm.dpm.thermal.work);
-	/* switch to the thermal state */
-	enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
-	int temp, size = sizeof(temp);
-
-	if (!adev->pm.dpm_enabled)
-		return;
-
-	if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_TEMP,
-				    (void *)&temp, &size)) {
-		if (temp < adev->pm.dpm.thermal.min_temp)
-			/* switch back the user state */
-			dpm_state = adev->pm.dpm.user_state;
-	} else {
-		if (adev->pm.dpm.thermal.high_to_low)
-			/* switch back the user state */
-			dpm_state = adev->pm.dpm.user_state;
-	}
-	mutex_lock(&adev->pm.mutex);
-	if (dpm_state == POWER_STATE_TYPE_INTERNAL_THERMAL)
-		adev->pm.dpm.thermal_active = true;
-	else
-		adev->pm.dpm.thermal_active = false;
-	adev->pm.dpm.state = dpm_state;
-	mutex_unlock(&adev->pm.mutex);
-
-	amdgpu_pm_compute_clocks(adev);
-}
-
 void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 {
-	int i = 0;
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
-	if (!adev->pm.dpm_enabled)
+	if (!pp_funcs->pm_compute_clocks)
 		return;
 
-	if (adev->mode_info.num_crtc)
-		amdgpu_display_bandwidth_update(adev);
-
-	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
-		struct amdgpu_ring *ring = adev->rings[i];
-		if (ring && ring->sched.ready)
-			amdgpu_fence_wait_empty(ring);
-	}
-
-	if ((adev->family == AMDGPU_FAMILY_SI) ||
-	     (adev->family == AMDGPU_FAMILY_KV)) {
-		amdgpu_dpm_get_active_displays(adev);
-		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
-	} else {
-		if (!amdgpu_device_has_dc_support(adev)) {
-			amdgpu_dpm_get_active_displays(adev);
-			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
-			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
-			adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev);
-			/* we have issues with mclk switching with
-			 * refresh rates over 120 hz on the non-DC code.
-			 */
-			if (adev->pm.pm_display_cfg.vrefresh > 120)
-				adev->pm.pm_display_cfg.min_vblank_time = 0;
-			if (adev->powerplay.pp_funcs->display_configuration_change)
-				adev->powerplay.pp_funcs->display_configuration_change(
-							adev->powerplay.pp_handle,
-							&adev->pm.pm_display_cfg);
-		}
-		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
-	}
+	pp_funcs->pm_compute_clocks(adev->powerplay.pp_handle);
 }
 
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
 {
 	int ret = 0;
 
-	if (adev->family == AMDGPU_FAMILY_SI) {
-		mutex_lock(&adev->pm.mutex);
-		if (enable) {
-			adev->pm.dpm.uvd_active = true;
-			adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
-		} else {
-			adev->pm.dpm.uvd_active = false;
-		}
-		mutex_unlock(&adev->pm.mutex);
+	ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_UVD, !enable);
+	if (ret)
+		DRM_ERROR("Dpm %s uvd failed, ret = %d. \n",
+			  enable ? "enable" : "disable", ret);
 
-		amdgpu_pm_compute_clocks(adev);
-	} else {
-		ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_UVD, !enable);
-		if (ret)
-			DRM_ERROR("Dpm %s uvd failed, ret = %d. \n",
-				  enable ? "enable" : "disable", ret);
-
-		/* enable/disable Low Memory PState for UVD (4k videos) */
-		if (adev->asic_type == CHIP_STONEY &&
-			adev->uvd.decode_image_width >= WIDTH_4K) {
-			struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
-
-			if (hwmgr && hwmgr->hwmgr_func &&
-			    hwmgr->hwmgr_func->update_nbdpm_pstate)
-				hwmgr->hwmgr_func->update_nbdpm_pstate(hwmgr,
-								       !enable,
-								       true);
-		}
+	/* enable/disable Low Memory PState for UVD (4k videos) */
+	if (adev->asic_type == CHIP_STONEY &&
+		adev->uvd.decode_image_width >= WIDTH_4K) {
+		struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+
+		if (hwmgr && hwmgr->hwmgr_func &&
+		    hwmgr->hwmgr_func->update_nbdpm_pstate)
+			hwmgr->hwmgr_func->update_nbdpm_pstate(hwmgr,
+							       !enable,
+							       true);
 	}
 }
 
@@ -544,24 +401,10 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
 {
 	int ret = 0;
 
-	if (adev->family == AMDGPU_FAMILY_SI) {
-		mutex_lock(&adev->pm.mutex);
-		if (enable) {
-			adev->pm.dpm.vce_active = true;
-			/* XXX select vce level based on ring/task */
-			adev->pm.dpm.vce_level = AMD_VCE_LEVEL_AC_ALL;
-		} else {
-			adev->pm.dpm.vce_active = false;
-		}
-		mutex_unlock(&adev->pm.mutex);
-
-		amdgpu_pm_compute_clocks(adev);
-	} else {
-		ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VCE, !enable);
-		if (ret)
-			DRM_ERROR("Dpm %s vce failed, ret = %d. \n",
-				  enable ? "enable" : "disable", ret);
-	}
+	ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VCE, !enable);
+	if (ret)
+		DRM_ERROR("Dpm %s vce failed, ret = %d. \n",
+			  enable ? "enable" : "disable", ret);
 }
 
 void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
@@ -730,10 +573,7 @@ void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
 {
 	adev->pm.dpm.user_state = state;
 
-	if (adev->powerplay.pp_funcs->dispatch_tasks)
-		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
-	else
-		amdgpu_pm_compute_clocks(adev);
+	amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
 }
 
 enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
@@ -903,12 +743,9 @@ int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
 
 	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
 
-	if (amdgpu_dpm_dispatch_task(adev,
-				     AMD_PP_TASK_READJUST_POWER_STATE,
-				     NULL) == -EOPNOTSUPP) {
-		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-		amdgpu_pm_compute_clocks(adev);
-	}
+	amdgpu_dpm_dispatch_task(adev,
+				 AMD_PP_TASK_READJUST_POWER_STATE,
+				 NULL);
 
 	return 0;
 }
@@ -932,12 +769,9 @@ int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
 
 	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
 
-	if (amdgpu_dpm_dispatch_task(adev,
-				     AMD_PP_TASK_READJUST_POWER_STATE,
-				     NULL) == -EOPNOTSUPP) {
-		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-		amdgpu_pm_compute_clocks(adev);
-	}
+	amdgpu_dpm_dispatch_task(adev,
+				 AMD_PP_TASK_READJUST_POWER_STATE,
+				 NULL);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
new file mode 100644
index 000000000000..ba5f6413412d
--- /dev/null
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "amdgpu.h"
+#include "amdgpu_display.h"
+#include "hwmgr.h"
+#include "amdgpu_smu.h"
+
+void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
+{
+	struct drm_device *ddev = adev_to_drm(adev);
+	struct drm_crtc *crtc;
+	struct amdgpu_crtc *amdgpu_crtc;
+
+	adev->pm.dpm.new_active_crtcs = 0;
+	adev->pm.dpm.new_active_crtc_count = 0;
+	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+		list_for_each_entry(crtc,
+				    &ddev->mode_config.crtc_list, head) {
+			amdgpu_crtc = to_amdgpu_crtc(crtc);
+			if (amdgpu_crtc->enabled) {
+				adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
+				adev->pm.dpm.new_active_crtc_count++;
+			}
+		}
+	}
+}
+
+u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
+{
+	struct drm_device *dev = adev_to_drm(adev);
+	struct drm_crtc *crtc;
+	struct amdgpu_crtc *amdgpu_crtc;
+	u32 vblank_in_pixels;
+	u32 vblank_time_us = 0xffffffff; /* if the displays are off, vblank time is max */
+
+	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+			amdgpu_crtc = to_amdgpu_crtc(crtc);
+			if (crtc->enabled && amdgpu_crtc->enabled && amdgpu_crtc->hw_mode.clock) {
+				vblank_in_pixels =
+					amdgpu_crtc->hw_mode.crtc_htotal *
+					(amdgpu_crtc->hw_mode.crtc_vblank_end -
+					amdgpu_crtc->hw_mode.crtc_vdisplay +
+					(amdgpu_crtc->v_border * 2));
+
+				vblank_time_us = vblank_in_pixels * 1000 / amdgpu_crtc->hw_mode.clock;
+				break;
+			}
+		}
+	}
+
+	return vblank_time_us;
+}
+
+u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
+{
+	struct drm_device *dev = adev_to_drm(adev);
+	struct drm_crtc *crtc;
+	struct amdgpu_crtc *amdgpu_crtc;
+	u32 vrefresh = 0;
+
+	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+			amdgpu_crtc = to_amdgpu_crtc(crtc);
+			if (crtc->enabled && amdgpu_crtc->enabled && amdgpu_crtc->hw_mode.clock) {
+				vrefresh = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
+				break;
+			}
+		}
+	}
+
+	return vrefresh;
+}
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 1462c4933ca1..5b68f9fe4fde 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -420,8 +420,6 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
 int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
 			   void *data, uint32_t *size);
 
-void amdgpu_dpm_thermal_work_handler(struct work_struct *work);
-
 void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
new file mode 100644
index 000000000000..5c2a89f0d5d5
--- /dev/null
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef __AMDGPU_DPM_INTERNAL_H__
+#define __AMDGPU_DPM_INTERNAL_H__
+
+void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev);
+
+u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
+
+u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev);
+
+#endif
diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
index 20cb234d5061..d57d5c28c013 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
@@ -31,7 +31,8 @@
 #include "power_state.h"
 #include "amdgpu.h"
 #include "hwmgr.h"
-
+#include "amdgpu_dpm_internal.h"
+#include "amdgpu_display.h"
 
 static const struct amd_pm_funcs pp_dpm_funcs;
 
@@ -1678,6 +1679,41 @@ static int pp_get_prv_buffer_details(void *handle, void **addr, size_t *size)
 	return 0;
 }
 
+static void pp_pm_compute_clocks(void *handle)
+{
+	struct pp_hwmgr *hwmgr = handle;
+	struct amdgpu_device *adev = hwmgr->adev;
+	int i = 0;
+
+	if (adev->mode_info.num_crtc)
+		amdgpu_display_bandwidth_update(adev);
+
+	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+		struct amdgpu_ring *ring = adev->rings[i];
+		if (ring && ring->sched.ready)
+			amdgpu_fence_wait_empty(ring);
+	}
+
+	if (!amdgpu_device_has_dc_support(adev)) {
+		amdgpu_dpm_get_active_displays(adev);
+		adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
+		adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
+		adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev);
+		/* we have issues with mclk switching with
+		 * refresh rates over 120 hz on the non-DC code.
+		 */
+		if (adev->pm.pm_display_cfg.vrefresh > 120)
+			adev->pm.pm_display_cfg.min_vblank_time = 0;
+
+		pp_display_configuration_change(handle,
+						&adev->pm.pm_display_cfg);
+	}
+
+	pp_dpm_dispatch_tasks(handle,
+			      AMD_PP_TASK_DISPLAY_CONFIG_CHANGE,
+			      NULL);
+}
+
 static const struct amd_pm_funcs pp_dpm_funcs = {
 	.load_firmware = pp_dpm_load_fw,
 	.wait_for_fw_loading_complete = pp_dpm_fw_loading_complete,
@@ -1742,4 +1778,5 @@ static const struct amd_pm_funcs pp_dpm_funcs = {
 	.get_gpu_metrics = pp_get_gpu_metrics,
 	.gfx_state_change_set = pp_gfx_state_change_set,
 	.get_smu_prv_buf_details = pp_get_prv_buffer_details,
+	.pm_compute_clocks = pp_pm_compute_clocks,
 };
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
index 90f4c65659e2..72824ef61edd 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
@@ -3088,7 +3088,7 @@ static int kv_dpm_hw_init(void *handle)
 	else
 		adev->pm.dpm_enabled = true;
 	mutex_unlock(&adev->pm.mutex);
-	amdgpu_pm_compute_clocks(adev);
+	amdgpu_legacy_dpm_compute_clocks(adev);
 	return ret;
 }
 
@@ -3136,7 +3136,7 @@ static int kv_dpm_resume(void *handle)
 			adev->pm.dpm_enabled = true;
 		mutex_unlock(&adev->pm.mutex);
 		if (adev->pm.dpm_enabled)
-			amdgpu_pm_compute_clocks(adev);
+			amdgpu_legacy_dpm_compute_clocks(adev);
 	}
 	return 0;
 }
@@ -3390,7 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
 	.get_vce_clock_state = amdgpu_get_vce_clock_state,
 	.check_state_equal = kv_check_state_equal,
 	.read_sensor = &kv_dpm_read_sensor,
-	.change_power_state = amdgpu_dpm_change_power_state_locked,
+	.pm_compute_clocks = amdgpu_legacy_dpm_compute_clocks,
 };
 
 static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
index 9427c1026e1d..9e6bc562fc5a 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
@@ -26,6 +26,8 @@
 #include "atom.h"
 #include "amd_pcie.h"
 #include "legacy_dpm.h"
+#include "amdgpu_dpm_internal.h"
+#include "amdgpu_display.h"
 
 #define amdgpu_dpm_pre_set_power_state(adev) \
 		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
@@ -1378,9 +1380,8 @@ static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
 	return NULL;
 }
 
-int amdgpu_dpm_change_power_state_locked(void *handle)
+static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 {
-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	struct amdgpu_ps *ps;
 	enum amd_pm_state_type dpm_state;
 	int ret;
@@ -1451,3 +1452,58 @@ int amdgpu_dpm_change_power_state_locked(void *handle)
 
 	return 0;
 }
+
+void amdgpu_legacy_dpm_compute_clocks(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int i = 0;
+
+	if (adev->mode_info.num_crtc)
+		amdgpu_display_bandwidth_update(adev);
+
+	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+		struct amdgpu_ring *ring = adev->rings[i];
+		if (ring && ring->sched.ready)
+			amdgpu_fence_wait_empty(ring);
+	}
+
+	amdgpu_dpm_get_active_displays(adev);
+
+	amdgpu_dpm_change_power_state_locked(adev);
+}
+
+void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
+{
+	struct amdgpu_device *adev =
+		container_of(work, struct amdgpu_device,
+			     pm.dpm.thermal.work);
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	/* switch to the thermal state */
+	enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
+	int temp, size = sizeof(temp);
+
+	if (!adev->pm.dpm_enabled)
+		return;
+
+	if (!pp_funcs->read_sensor(adev->powerplay.pp_handle,
+				   AMDGPU_PP_SENSOR_GPU_TEMP,
+				   (void *)&temp,
+				   &size)) {
+		if (temp < adev->pm.dpm.thermal.min_temp)
+			/* switch back the user state */
+			dpm_state = adev->pm.dpm.user_state;
+	} else {
+		if (adev->pm.dpm.thermal.high_to_low)
+			/* switch back the user state */
+			dpm_state = adev->pm.dpm.user_state;
+	}
+
+	if (dpm_state == POWER_STATE_TYPE_INTERNAL_THERMAL)
+		adev->pm.dpm.thermal_active = true;
+	else
+		adev->pm.dpm.thermal_active = false;
+
+	adev->pm.dpm.state = dpm_state;
+
+	amdgpu_legacy_dpm_compute_clocks(adev->powerplay.pp_handle);
+}
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
index 4adc765c8824..3c1f02a63376 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
+++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
@@ -65,6 +65,7 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
 void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
 void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
 struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
-int amdgpu_dpm_change_power_state_locked(void *handle);
 void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
+void amdgpu_legacy_dpm_compute_clocks(void *handle);
+void amdgpu_dpm_thermal_work_handler(struct work_struct *work);
 #endif
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
index a2881c90d187..b8dbddefb74e 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
@@ -3891,6 +3891,40 @@ static int si_set_boot_state(struct amdgpu_device *adev)
 }
 #endif
 
+static int si_set_powergating_by_smu(void *handle,
+				     uint32_t block_type,
+				     bool gate)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (block_type) {
+	case AMD_IP_BLOCK_TYPE_UVD:
+		if (!gate) {
+			adev->pm.dpm.uvd_active = true;
+			adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
+		} else {
+			adev->pm.dpm.uvd_active = false;
+		}
+
+		amdgpu_legacy_dpm_compute_clocks(handle);
+		break;
+	case AMD_IP_BLOCK_TYPE_VCE:
+		if (!gate) {
+			adev->pm.dpm.vce_active = true;
+			/* XXX select vce level based on ring/task */
+			adev->pm.dpm.vce_level = AMD_VCE_LEVEL_AC_ALL;
+		} else {
+			adev->pm.dpm.vce_active = false;
+		}
+
+		amdgpu_legacy_dpm_compute_clocks(handle);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
 static int si_set_sw_state(struct amdgpu_device *adev)
 {
 	return (amdgpu_si_send_msg_to_smc(adev, PPSMC_MSG_SwitchToSwState) == PPSMC_Result_OK) ?
@@ -7801,7 +7835,7 @@ static int si_dpm_hw_init(void *handle)
 	else
 		adev->pm.dpm_enabled = true;
 	mutex_unlock(&adev->pm.mutex);
-	amdgpu_pm_compute_clocks(adev);
+	amdgpu_legacy_dpm_compute_clocks(adev);
 	return ret;
 }
 
@@ -7849,7 +7883,7 @@ static int si_dpm_resume(void *handle)
 			adev->pm.dpm_enabled = true;
 		mutex_unlock(&adev->pm.mutex);
 		if (adev->pm.dpm_enabled)
-			amdgpu_pm_compute_clocks(adev);
+			amdgpu_legacy_dpm_compute_clocks(adev);
 	}
 	return 0;
 }
@@ -8094,6 +8128,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
 	.print_power_state = &si_dpm_print_power_state,
 	.debugfs_print_current_performance_level = &si_dpm_debugfs_print_current_performance_level,
 	.force_performance_level = &si_dpm_force_performance_level,
+	.set_powergating_by_smu = &si_set_powergating_by_smu,
 	.vblank_too_short = &si_dpm_vblank_too_short,
 	.set_fan_control_mode = &si_dpm_set_fan_control_mode,
 	.get_fan_control_mode = &si_dpm_get_fan_control_mode,
@@ -8102,7 +8137,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
 	.check_state_equal = &si_check_state_equal,
 	.get_vce_clock_state = amdgpu_get_vce_clock_state,
 	.read_sensor = &si_dpm_read_sensor,
-	.change_power_state = amdgpu_dpm_change_power_state_locked,
+	.pm_compute_clocks = amdgpu_legacy_dpm_compute_clocks,
 };
 
 static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 10/17] drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (8 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 09/17] drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task() Evan Quan
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Move the Stoney-only code pieces into smu8_hwmgr.c instead of keeping
them in the common amdgpu_dpm.c.
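
For reference, the relocated check as it lands in smu8_dpm_powergate_uvd()
(taken verbatim from the smu8_hwmgr.c hunk below):

	/* enable/disable Low Memory PState for UVD (4k videos) */
	if (adev->asic_type == CHIP_STONEY &&
	    adev->uvd.decode_image_width >= WIDTH_4K)
		smu8_nbdpm_pstate_enable_disable(hwmgr, bgate, true);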

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Ieb7ed5fb6140401a7692b401c5a42dc53da92af8
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c                | 14 --------------
 drivers/gpu/drm/amd/pm/inc/hwmgr.h                 |  3 ---
 .../gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c    | 10 +++++++++-
 3 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 1399b4426080..c6299e406848 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -32,8 +32,6 @@
 #include "hwmgr.h"
 #include <linux/power_supply.h>
 
-#define WIDTH_4K 3840
-
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
 
@@ -383,18 +381,6 @@ void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
 	if (ret)
 		DRM_ERROR("Dpm %s uvd failed, ret = %d. \n",
 			  enable ? "enable" : "disable", ret);
-
-	/* enable/disable Low Memory PState for UVD (4k videos) */
-	if (adev->asic_type == CHIP_STONEY &&
-		adev->uvd.decode_image_width >= WIDTH_4K) {
-		struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
-
-		if (hwmgr && hwmgr->hwmgr_func &&
-		    hwmgr->hwmgr_func->update_nbdpm_pstate)
-			hwmgr->hwmgr_func->update_nbdpm_pstate(hwmgr,
-							       !enable,
-							       true);
-	}
 }
 
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
diff --git a/drivers/gpu/drm/amd/pm/inc/hwmgr.h b/drivers/gpu/drm/amd/pm/inc/hwmgr.h
index 8ed01071fe5a..03226baea65e 100644
--- a/drivers/gpu/drm/amd/pm/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/pm/inc/hwmgr.h
@@ -331,9 +331,6 @@ struct pp_hwmgr_func {
 					uint32_t mc_addr_low,
 					uint32_t mc_addr_hi,
 					uint32_t size);
-	int (*update_nbdpm_pstate)(struct pp_hwmgr *hwmgr,
-					bool enable,
-					bool lock);
 	int (*get_thermal_temperature_range)(struct pp_hwmgr *hwmgr,
 					struct PP_TemperatureRange *range);
 	int (*get_power_profile_mode)(struct pp_hwmgr *hwmgr, char *buf);
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
index 03bf8f069222..b50fd4a4a3d1 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
@@ -1950,9 +1950,12 @@ static void smu8_dpm_powergate_acp(struct pp_hwmgr *hwmgr, bool bgate)
 		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ACPPowerON, NULL);
 }
 
+#define WIDTH_4K		3840
+
 static void smu8_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
 {
 	struct smu8_hwmgr *data = hwmgr->backend;
+	struct amdgpu_device *adev = hwmgr->adev;
 
 	data->uvd_power_gated = bgate;
 
@@ -1976,6 +1979,12 @@ static void smu8_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
 		smu8_dpm_update_uvd_dpm(hwmgr, false);
 	}
 
+	/* enable/disable Low Memory PState for UVD (4k videos) */
+	if (adev->asic_type == CHIP_STONEY &&
+	    adev->uvd.decode_image_width >= WIDTH_4K)
+		smu8_nbdpm_pstate_enable_disable(hwmgr,
+						 bgate,
+						 true);
 }
 
 static void smu8_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
@@ -2037,7 +2046,6 @@ static const struct pp_hwmgr_func smu8_hwmgr_funcs = {
 	.power_state_set = smu8_set_power_state_tasks,
 	.dynamic_state_management_disable = smu8_disable_dpm_tasks,
 	.notify_cac_buffer_info = smu8_notify_cac_buffer_info,
-	.update_nbdpm_pstate = smu8_nbdpm_pstate_enable_disable,
 	.get_thermal_temperature_range = smu8_get_thermal_temperature_range,
 };
 
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task()
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (9 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 10/17] drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 13:48   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 12/17] drm/amd/pm: drop redundant or unused APIs and data structures Evan Quan
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

We should avoid having multi-function APIs. It should be up to the caller
to determine when or whether to call amdgpu_dpm_dispatch_task().
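
For illustration, a minimal sketch of the calling convention this patch
establishes, taken from the amdgpu_pm.c hunks below (the pm_runtime
handling around the error path is omitted here for brevity):

	/* The wrapper now only programs the overdrive value ... */
	ret = amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
	if (ret)
		return ret;

	/* ... and the caller decides when to re-evaluate the power state. */
	amdgpu_dpm_dispatch_task(adev,
				 AMD_PP_TASK_READJUST_POWER_STATE,
				 NULL);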

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I78ec4eb8ceb6e526a4734113d213d15a5fbaa8a4
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 18 ++----------------
 drivers/gpu/drm/amd/pm/amdgpu_pm.c  | 26 ++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index c6299e406848..8f0ae58f4292 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -558,8 +558,6 @@ void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
 				enum amd_pm_state_type state)
 {
 	adev->pm.dpm.user_state = state;
-
-	amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
 }
 
 enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
@@ -727,13 +725,7 @@ int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
 	if (!pp_funcs->set_sclk_od)
 		return -EOPNOTSUPP;
 
-	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
-
-	amdgpu_dpm_dispatch_task(adev,
-				 AMD_PP_TASK_READJUST_POWER_STATE,
-				 NULL);
-
-	return 0;
+	return pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
 }
 
 int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
@@ -753,13 +745,7 @@ int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
 	if (!pp_funcs->set_mclk_od)
 		return -EOPNOTSUPP;
 
-	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
-
-	amdgpu_dpm_dispatch_task(adev,
-				 AMD_PP_TASK_READJUST_POWER_STATE,
-				 NULL);
-
-	return 0;
+	return pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
 }
 
 int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index fa2f4e11e94e..89e1134d660f 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -187,6 +187,10 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
 
 	amdgpu_dpm_set_power_state(adev, state);
 
+	amdgpu_dpm_dispatch_task(adev,
+				 AMD_PP_TASK_ENABLE_USER_STATE,
+				 &state);
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -1278,7 +1282,16 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
 		return ret;
 	}
 
-	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
+	ret = amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
+	if (ret) {
+		pm_runtime_mark_last_busy(ddev->dev);
+		pm_runtime_put_autosuspend(ddev->dev);
+		return ret;
+	}
+
+	amdgpu_dpm_dispatch_task(adev,
+				 AMD_PP_TASK_READJUST_POWER_STATE,
+				 NULL);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1340,7 +1353,16 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
 		return ret;
 	}
 
-	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
+	ret = amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
+	if (ret) {
+		pm_runtime_mark_last_busy(ddev->dev);
+		pm_runtime_put_autosuspend(ddev->dev);
+		return ret;
+	}
+
+	amdgpu_dpm_dispatch_task(adev,
+				 AMD_PP_TASK_READJUST_POWER_STATE,
+				 NULL);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 12/17] drm/amd/pm: drop redundant or unused APIs and data structures
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (10 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task() Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power Evan Quan
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Drop the APIs and data structures which are redundant or no longer used
anywhere.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I57d2a03dcda02d0b5d9c5ffbdd37bffe49945407
---
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h | 49 -------------------------
 drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h  |  4 ++
 2 files changed, 4 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 5b68f9fe4fde..5c54aad85635 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -88,19 +88,6 @@ struct amdgpu_dpm_thermal {
 	struct amdgpu_irq_src	irq;
 };
 
-enum amdgpu_clk_action
-{
-	AMDGPU_SCLK_UP = 1,
-	AMDGPU_SCLK_DOWN
-};
-
-struct amdgpu_blacklist_clocks
-{
-	u32 sclk;
-	u32 mclk;
-	enum amdgpu_clk_action action;
-};
-
 struct amdgpu_clock_and_voltage_limits {
 	u32 sclk;
 	u32 mclk;
@@ -239,10 +226,6 @@ struct amdgpu_dpm_fan {
 	bool ucode_fan_control;
 };
 
-#define amdgpu_dpm_reset_power_profile_state(adev, request) \
-		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
-			(adev)->powerplay.pp_handle, request))
-
 struct amdgpu_dpm {
 	struct amdgpu_ps        *ps;
 	/* number of valid power states */
@@ -339,35 +322,6 @@ struct amdgpu_pm {
 	bool			pp_force_state_enabled;
 };
 
-#define R600_SSTU_DFLT                               0
-#define R600_SST_DFLT                                0x00C8
-
-/* XXX are these ok? */
-#define R600_TEMP_RANGE_MIN (90 * 1000)
-#define R600_TEMP_RANGE_MAX (120 * 1000)
-
-#define FDO_PWM_MODE_STATIC  1
-#define FDO_PWM_MODE_STATIC_RPM 5
-
-enum amdgpu_td {
-	AMDGPU_TD_AUTO,
-	AMDGPU_TD_UP,
-	AMDGPU_TD_DOWN,
-};
-
-enum amdgpu_display_watermark {
-	AMDGPU_DISPLAY_WATERMARK_LOW = 0,
-	AMDGPU_DISPLAY_WATERMARK_HIGH = 1,
-};
-
-enum amdgpu_display_gap
-{
-    AMDGPU_PM_DISPLAY_GAP_VBLANK_OR_WM = 0,
-    AMDGPU_PM_DISPLAY_GAP_VBLANK       = 1,
-    AMDGPU_PM_DISPLAY_GAP_WATERMARK    = 2,
-    AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
-};
-
 u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
 int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
 			   void *data, uint32_t *size);
@@ -417,9 +371,6 @@ int amdgpu_dpm_smu_i2c_bus_access(struct amdgpu_device *adev,
 
 void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
 
-int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
-			   void *data, uint32_t *size);
-
 void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
index beea03810bca..67a25da79256 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
@@ -26,6 +26,10 @@
 #include "amdgpu_smu.h"
 
 #if defined(SWSMU_CODE_LAYER_L2) || defined(SWSMU_CODE_LAYER_L3) || defined(SWSMU_CODE_LAYER_L4)
+
+#define FDO_PWM_MODE_STATIC  1
+#define FDO_PWM_MODE_STATIC_RPM 5
+
 int smu_cmn_send_msg_without_waiting(struct smu_context *smu,
 				     uint16_t msg_index,
 				     uint32_t param);
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (11 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 12/17] drm/amd/pm: drop redundant or unused APIs and data structures Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 13:57   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 14/17] drm/amd/pm: relocate the power related headers Evan Quan
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

This hides the power implementation details from other blocks. And, as
was done for the powerplay framework, we hook the smu_context to
adev->powerplay.pp_handle.
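
For illustration, the resulting access pattern (mirroring the
amdgpu_dpm.c hunks below): consumers fetch the smu_context through the
opaque powerplay handle instead of dereferencing adev->smu directly.

	struct smu_context *smu = adev->powerplay.pp_handle;

	if (!is_support_sw_smu(adev))
		return -EOPNOTSUPP;

	return smu_get_ecc_info(smu, umc_ecc);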

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I3969c9f62a8b63dc6e4321a488d8f15022ffeb3d
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  6 --
 .../gpu/drm/amd/include/kgd_pp_interface.h    |  9 +++
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 51 ++++++++++------
 drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       | 11 +---
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 60 +++++++++++++------
 .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  9 +--
 .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |  9 +--
 .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |  9 +--
 .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  4 +-
 .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  9 +--
 .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  8 +--
 11 files changed, 111 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index c987813a4996..fefabd568483 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -99,7 +99,6 @@
 #include "amdgpu_gem.h"
 #include "amdgpu_doorbell.h"
 #include "amdgpu_amdkfd.h"
-#include "amdgpu_smu.h"
 #include "amdgpu_discovery.h"
 #include "amdgpu_mes.h"
 #include "amdgpu_umc.h"
@@ -950,11 +949,6 @@ struct amdgpu_device {
 
 	/* powerplay */
 	struct amd_powerplay		powerplay;
-
-	/* smu */
-	struct smu_context		smu;
-
-	/* dpm */
 	struct amdgpu_pm		pm;
 	u32				cg_flags;
 	u32				pg_flags;
diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index 7919e96e772b..da6a82430048 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -25,6 +25,9 @@
 #define __KGD_PP_INTERFACE_H__
 
 extern const struct amdgpu_ip_block_version pp_smu_ip_block;
+extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
+extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
+extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
 
 enum smu_event_type {
 	SMU_EVENT_RESET_COMPLETE = 0,
@@ -244,6 +247,12 @@ enum pp_power_type
 	PP_PWR_TYPE_FAST,
 };
 
+enum smu_ppt_limit_type
+{
+	SMU_DEFAULT_PPT_LIMIT = 0,
+	SMU_FAST_PPT_LIMIT,
+};
+
 #define PP_GROUP_MASK        0xF0000000
 #define PP_GROUP_SHIFT       28
 
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 8f0ae58f4292..a5cbbf9367fe 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -31,6 +31,7 @@
 #include "amdgpu_display.h"
 #include "hwmgr.h"
 #include <linux/power_supply.h>
+#include "amdgpu_smu.h"
 
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
@@ -213,7 +214,7 @@ int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
 
 bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (is_support_sw_smu(adev))
 		return smu_mode1_reset_is_support(smu);
@@ -223,7 +224,7 @@ bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
 
 int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (is_support_sw_smu(adev))
 		return smu_mode1_reset(smu);
@@ -276,7 +277,7 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
 
 int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device *adev, bool en)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (is_support_sw_smu(adev))
 		return smu_allow_xgmi_power_down(smu, en);
@@ -341,7 +342,7 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
 		mutex_unlock(&adev->pm.mutex);
 
 		if (is_support_sw_smu(adev))
-			smu_set_ac_dc(&adev->smu);
+			smu_set_ac_dc(adev->powerplay.pp_handle);
 	}
 }
 
@@ -423,15 +424,16 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
 
 int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
 {
-	return smu_set_light_sbr(&adev->smu, enable);
+	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
 }
 
 int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret = 0;
 
-	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
-		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
+	if (is_support_sw_smu(adev))
+		ret = smu_send_hbm_bad_pages_num(smu, size);
 
 	return ret;
 }
@@ -446,7 +448,7 @@ int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
 
 	switch (type) {
 	case PP_SCLK:
-		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
+		return smu_get_dpm_freq_range(adev->powerplay.pp_handle, SMU_SCLK, min, max);
 	default:
 		return -EINVAL;
 	}
@@ -457,12 +459,14 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
 				   uint32_t min,
 				   uint32_t max)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
 	switch (type) {
 	case PP_SCLK:
-		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
+		return smu_set_soft_freq_range(smu, SMU_SCLK, min, max);
 	default:
 		return -EINVAL;
 	}
@@ -470,33 +474,41 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
 
 int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
 	if (!is_support_sw_smu(adev))
 		return 0;
 
-	return smu_write_watermarks_table(&adev->smu);
+	return smu_write_watermarks_table(smu);
 }
 
 int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
 			      enum smu_event_type event,
 			      uint64_t event_arg)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	return smu_wait_for_event(&adev->smu, event, event_arg);
+	return smu_wait_for_event(smu, event, event_arg);
 }
 
 int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	return smu_get_status_gfxoff(&adev->smu, value);
+	return smu_get_status_gfxoff(smu, value);
 }
 
 uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
 {
-	return atomic64_read(&adev->smu.throttle_int_counter);
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
+	return atomic64_read(&smu->throttle_int_counter);
 }
 
 /* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
@@ -518,10 +530,12 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
 int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
 			    void *umc_ecc)
 {
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	return smu_get_ecc_info(&adev->smu, umc_ecc);
+	return smu_get_ecc_info(smu, umc_ecc);
 }
 
 struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
@@ -919,9 +933,10 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
 int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
 {
 	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
-	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
-	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
+	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
+	    (is_support_sw_smu(adev) && smu->is_apu) ||
 		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
 		return true;
 
@@ -944,7 +959,9 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
 
 int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
 {
-	return adev->smu.cpu_core_num;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
+	return smu->cpu_core_num;
 }
 
 void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
index 29791bb21fba..f44139b415b4 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
@@ -205,12 +205,6 @@ enum smu_power_src_type
 	SMU_POWER_SOURCE_COUNT,
 };
 
-enum smu_ppt_limit_type
-{
-	SMU_DEFAULT_PPT_LIMIT = 0,
-	SMU_FAST_PPT_LIMIT,
-};
-
 enum smu_ppt_limit_level
 {
 	SMU_PPT_LIMIT_MIN = -1,
@@ -1389,10 +1383,6 @@ int smu_mode1_reset(struct smu_context *smu);
 
 extern const struct amd_ip_funcs smu_ip_funcs;
 
-extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
-extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
-extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
-
 bool is_support_sw_smu(struct amdgpu_device *adev);
 bool is_support_cclk_dpm(struct amdgpu_device *adev);
 int smu_write_watermarks_table(struct smu_context *smu);
@@ -1416,6 +1406,7 @@ int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
 int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
 int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
 void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev);
+int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size);
 
 #endif
 #endif
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index eaed5aba7547..2c3fd3cfef05 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -468,7 +468,7 @@ bool is_support_sw_smu(struct amdgpu_device *adev)
 
 bool is_support_cclk_dpm(struct amdgpu_device *adev)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
 		return false;
@@ -572,7 +572,7 @@ static int smu_get_driver_allowed_feature_mask(struct smu_context *smu)
 
 static int smu_set_funcs(struct amdgpu_device *adev)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
 		smu->od_enabled = true;
@@ -624,7 +624,11 @@ static int smu_set_funcs(struct amdgpu_device *adev)
 static int smu_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu;
+
+	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
+	if (!smu)
+		return -ENOMEM;
 
 	smu->adev = adev;
 	smu->pm_enabled = !!amdgpu_dpm;
@@ -684,7 +688,7 @@ static int smu_set_default_dpm_table(struct smu_context *smu)
 static int smu_late_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret = 0;
 
 	smu_set_fine_grain_gfx_freq_parameters(smu);
@@ -730,7 +734,7 @@ static int smu_late_init(void *handle)
 
 	smu_get_fan_parameters(smu);
 
-	smu_handle_task(&adev->smu,
+	smu_handle_task(smu,
 			smu->smu_dpm.dpm_level,
 			AMD_PP_TASK_COMPLETE_INIT,
 			false);
@@ -1020,7 +1024,7 @@ static void smu_interrupt_work_fn(struct work_struct *work)
 static int smu_sw_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret;
 
 	smu->pool_size = adev->pm.smu_prv_buffer_size;
@@ -1095,7 +1099,7 @@ static int smu_sw_init(void *handle)
 static int smu_sw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret;
 
 	ret = smu_smc_table_sw_fini(smu);
@@ -1330,7 +1334,7 @@ static int smu_hw_init(void *handle)
 {
 	int ret;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
 		smu->pm_enabled = false;
@@ -1344,10 +1348,10 @@ static int smu_hw_init(void *handle)
 	}
 
 	if (smu->is_apu) {
-		smu_powergate_sdma(&adev->smu, false);
+		smu_powergate_sdma(smu, false);
 		smu_dpm_set_vcn_enable(smu, true);
 		smu_dpm_set_jpeg_enable(smu, true);
-		smu_set_gfx_cgpg(&adev->smu, true);
+		smu_set_gfx_cgpg(smu, true);
 	}
 
 	if (!smu->pm_enabled)
@@ -1501,13 +1505,13 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
 static int smu_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
 		return 0;
 
 	if (smu->is_apu) {
-		smu_powergate_sdma(&adev->smu, true);
+		smu_powergate_sdma(smu, true);
 	}
 
 	smu_dpm_set_vcn_enable(smu, false);
@@ -1524,6 +1528,14 @@ static int smu_hw_fini(void *handle)
 	return smu_smc_hw_cleanup(smu);
 }
 
+static void smu_late_fini(void *handle)
+{
+	struct amdgpu_device *adev = handle;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+
+	kfree(smu);
+}
+
 static int smu_reset(struct smu_context *smu)
 {
 	struct amdgpu_device *adev = smu->adev;
@@ -1551,7 +1563,7 @@ static int smu_reset(struct smu_context *smu)
 static int smu_suspend(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret;
 
 	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
@@ -1570,7 +1582,7 @@ static int smu_suspend(void *handle)
 
 	/* skip CGPG when in S0ix */
 	if (smu->is_apu && !adev->in_s0ix)
-		smu_set_gfx_cgpg(&adev->smu, false);
+		smu_set_gfx_cgpg(smu, false);
 
 	return 0;
 }
@@ -1579,7 +1591,7 @@ static int smu_resume(void *handle)
 {
 	int ret;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
 		return 0;
@@ -1602,7 +1614,7 @@ static int smu_resume(void *handle)
 	}
 
 	if (smu->is_apu)
-		smu_set_gfx_cgpg(&adev->smu, true);
+		smu_set_gfx_cgpg(smu, true);
 
 	smu->disable_uclk_switch = 0;
 
@@ -2134,6 +2146,7 @@ const struct amd_ip_funcs smu_ip_funcs = {
 	.sw_fini = smu_sw_fini,
 	.hw_init = smu_hw_init,
 	.hw_fini = smu_hw_fini,
+	.late_fini = smu_late_fini,
 	.suspend = smu_suspend,
 	.resume = smu_resume,
 	.is_idle = NULL,
@@ -3198,7 +3211,7 @@ int smu_stb_collect_info(struct smu_context *smu, void *buf, uint32_t size)
 static int smu_stb_debugfs_open(struct inode *inode, struct file *filp)
 {
 	struct amdgpu_device *adev = filp->f_inode->i_private;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	unsigned char *buf;
 	int r;
 
@@ -3223,7 +3236,7 @@ static ssize_t smu_stb_debugfs_read(struct file *filp, char __user *buf, size_t
 				loff_t *pos)
 {
 	struct amdgpu_device *adev = filp->f_inode->i_private;
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 
 	if (!filp->private_data)
@@ -3264,7 +3277,7 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
 {
 #if defined(CONFIG_DEBUG_FS)
 
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 
 	if (!smu->stb_context.stb_buf_size)
 		return;
@@ -3276,5 +3289,14 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
 			    &smu_stb_debugfs_fops,
 			    smu->stb_context.stb_buf_size);
 #endif
+}
+
+int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
+{
+	int ret = 0;
+
+	if (smu->ppt_funcs->send_hbm_bad_pages_num)
+		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);
 
+	return ret;
 }
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
index 05defeee0c87..a03bbd2a7aa0 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
@@ -2082,7 +2082,8 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
 			     struct i2c_msg *msg, int num_msgs)
 {
 	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
-	struct smu_table_context *smu_table = &adev->smu.smu_table;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = &smu_table->driver_table;
 	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
 	int i, j, r, c;
@@ -2128,9 +2129,9 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
 			}
 		}
 	}
-	mutex_lock(&adev->smu.mutex);
-	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
-	mutex_unlock(&adev->smu.mutex);
+	mutex_lock(&smu->mutex);
+	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
+	mutex_unlock(&smu->mutex);
 	if (r)
 		goto fail;
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
index 2bb7816b245a..37e11716e919 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
@@ -2779,7 +2779,8 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
 			   struct i2c_msg *msg, int num_msgs)
 {
 	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
-	struct smu_table_context *smu_table = &adev->smu.smu_table;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = &smu_table->driver_table;
 	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
 	int i, j, r, c;
@@ -2825,9 +2826,9 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
 			}
 		}
 	}
-	mutex_lock(&adev->smu.mutex);
-	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
-	mutex_unlock(&adev->smu.mutex);
+	mutex_lock(&smu->mutex);
+	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
+	mutex_unlock(&smu->mutex);
 	if (r)
 		goto fail;
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 777f717c37ae..6a5064f4ea86 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -3459,7 +3459,8 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
 				   struct i2c_msg *msg, int num_msgs)
 {
 	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
-	struct smu_table_context *smu_table = &adev->smu.smu_table;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = &smu_table->driver_table;
 	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
 	int i, j, r, c;
@@ -3505,9 +3506,9 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
 			}
 		}
 	}
-	mutex_lock(&adev->smu.mutex);
-	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
-	mutex_unlock(&adev->smu.mutex);
+	mutex_lock(&smu->mutex);
+	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
+	mutex_unlock(&smu->mutex);
 	if (r)
 		goto fail;
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
index 28b7c0562b99..2a53b5b1d261 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
@@ -1372,7 +1372,7 @@ static int smu_v11_0_set_irq_state(struct amdgpu_device *adev,
 				   unsigned tyep,
 				   enum amdgpu_interrupt_state state)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	uint32_t low, high;
 	uint32_t val = 0;
 
@@ -1441,7 +1441,7 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
 				 struct amdgpu_irq_src *source,
 				 struct amdgpu_iv_entry *entry)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	uint32_t client_id = entry->client_id;
 	uint32_t src_id = entry->src_id;
 	/*
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
index 6e781cee8bb6..3c82f5455f88 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
@@ -1484,7 +1484,8 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
 			      struct i2c_msg *msg, int num_msgs)
 {
 	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
-	struct smu_table_context *smu_table = &adev->smu.smu_table;
+	struct smu_context *smu = adev->powerplay.pp_handle;
+	struct smu_table_context *smu_table = &smu->smu_table;
 	struct smu_table *table = &smu_table->driver_table;
 	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
 	int i, j, r, c;
@@ -1530,9 +1531,9 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
 			}
 		}
 	}
-	mutex_lock(&adev->smu.mutex);
-	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
-	mutex_unlock(&adev->smu.mutex);
+	mutex_lock(&smu->mutex);
+	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
+	mutex_unlock(&smu->mutex);
 	if (r)
 		goto fail;
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
index 55421ea622fb..4ed01e9d88fb 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
@@ -1195,7 +1195,7 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
 				   unsigned tyep,
 				   enum amdgpu_interrupt_state state)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	uint32_t low, high;
 	uint32_t val = 0;
 
@@ -1270,7 +1270,7 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
 				 struct amdgpu_irq_src *source,
 				 struct amdgpu_iv_entry *entry)
 {
-	struct smu_context *smu = &adev->smu;
+	struct smu_context *smu = adev->powerplay.pp_handle;
 	uint32_t client_id = entry->client_id;
 	uint32_t src_id = entry->src_id;
 	/*
@@ -1316,11 +1316,11 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
 			switch (ctxid) {
 			case 0x3:
 				dev_dbg(adev->dev, "Switched to AC mode!\n");
-				smu_v13_0_ack_ac_dc_interrupt(&adev->smu);
+				smu_v13_0_ack_ac_dc_interrupt(smu);
 				break;
 			case 0x4:
 				dev_dbg(adev->dev, "Switched to DC mode!\n");
-				smu_v13_0_ack_ac_dc_interrupt(&adev->smu);
+				smu_v13_0_ack_ac_dc_interrupt(smu);
 				break;
 			case 0x7:
 				/*
-- 
2.29.0


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH V2 14/17] drm/amd/pm: relocate the power related headers
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (12 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30 14:07   ` Lazar, Lijo
  2021-11-30  7:42 ` [PATCH V2 15/17] drm/amd/pm: drop unnecessary gfxoff controls Evan Quan
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Instead of centralizing all headers in the same folder, separate them
into different folders and place them next to the source files that
actually need them.
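
In short, the headers end up grouped like this (a summary of the rename
list below):

	drivers/gpu/drm/amd/pm/
		powerplay/inc/  - powerplay (hwmgr/smumgr) headers
		swsmu/inc/      - swsmu headers
		legacy-dpm/     - si/kv legacy dpm sources and headers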

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Id74cb4c7006327ca7ecd22daf17321e417c4aa71
---
 drivers/gpu/drm/amd/pm/Makefile               | 10 +++---
 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile    | 32 +++++++++++++++++++
 .../pm/{powerplay => legacy-dpm}/cik_dpm.h    |  0
 .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.c |  0
 .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.h |  0
 .../amd/pm/{powerplay => legacy-dpm}/kv_smc.c |  0
 .../pm/{powerplay => legacy-dpm}/legacy_dpm.c |  0
 .../pm/{powerplay => legacy-dpm}/legacy_dpm.h |  0
 .../amd/pm/{powerplay => legacy-dpm}/ppsmc.h  |  0
 .../pm/{powerplay => legacy-dpm}/r600_dpm.h   |  0
 .../amd/pm/{powerplay => legacy-dpm}/si_dpm.c |  0
 .../amd/pm/{powerplay => legacy-dpm}/si_dpm.h |  0
 .../amd/pm/{powerplay => legacy-dpm}/si_smc.c |  0
 .../{powerplay => legacy-dpm}/sislands_smc.h  |  0
 drivers/gpu/drm/amd/pm/powerplay/Makefile     |  6 +---
 .../pm/{ => powerplay}/inc/amd_powerplay.h    |  0
 .../drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h |  0
 .../amd/pm/{ => powerplay}/inc/fiji_ppsmc.h   |  0
 .../pm/{ => powerplay}/inc/hardwaremanager.h  |  0
 .../drm/amd/pm/{ => powerplay}/inc/hwmgr.h    |  0
 .../{ => powerplay}/inc/polaris10_pwrvirus.h  |  0
 .../amd/pm/{ => powerplay}/inc/power_state.h  |  0
 .../drm/amd/pm/{ => powerplay}/inc/pp_debug.h |  0
 .../amd/pm/{ => powerplay}/inc/pp_endian.h    |  0
 .../amd/pm/{ => powerplay}/inc/pp_thermal.h   |  0
 .../amd/pm/{ => powerplay}/inc/ppinterrupt.h  |  0
 .../drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu10.h    |  0
 .../pm/{ => powerplay}/inc/smu10_driver_if.h  |  0
 .../pm/{ => powerplay}/inc/smu11_driver_if.h  |  0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu71.h    |  0
 .../pm/{ => powerplay}/inc/smu71_discrete.h   |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu72.h    |  0
 .../pm/{ => powerplay}/inc/smu72_discrete.h   |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu73.h    |  0
 .../pm/{ => powerplay}/inc/smu73_discrete.h   |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu74.h    |  0
 .../pm/{ => powerplay}/inc/smu74_discrete.h   |  0
 .../drm/amd/pm/{ => powerplay}/inc/smu75.h    |  0
 .../pm/{ => powerplay}/inc/smu75_discrete.h   |  0
 .../amd/pm/{ => powerplay}/inc/smu7_common.h  |  0
 .../pm/{ => powerplay}/inc/smu7_discrete.h    |  0
 .../amd/pm/{ => powerplay}/inc/smu7_fusion.h  |  0
 .../amd/pm/{ => powerplay}/inc/smu7_ppsmc.h   |  0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h |  0
 .../amd/pm/{ => powerplay}/inc/smu8_fusion.h  |  0
 .../gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h |  0
 .../pm/{ => powerplay}/inc/smu9_driver_if.h   |  0
 .../{ => powerplay}/inc/smu_ucode_xfer_cz.h   |  0
 .../{ => powerplay}/inc/smu_ucode_xfer_vi.h   |  0
 .../drm/amd/pm/{ => powerplay}/inc/smumgr.h   |  0
 .../amd/pm/{ => powerplay}/inc/tonga_ppsmc.h  |  0
 .../amd/pm/{ => powerplay}/inc/vega10_ppsmc.h |  0
 .../inc/vega12/smu9_driver_if.h               |  0
 .../amd/pm/{ => powerplay}/inc/vega12_ppsmc.h |  0
 .../amd/pm/{ => powerplay}/inc/vega20_ppsmc.h |  0
 .../amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h  |  0
 .../drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h   |  0
 .../amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h   |  0
 .../inc/smu11_driver_if_arcturus.h            |  0
 .../inc/smu11_driver_if_cyan_skillfish.h      |  0
 .../{ => swsmu}/inc/smu11_driver_if_navi10.h  |  0
 .../inc/smu11_driver_if_sienna_cichlid.h      |  0
 .../{ => swsmu}/inc/smu11_driver_if_vangogh.h |  0
 .../amd/pm/{ => swsmu}/inc/smu12_driver_if.h  |  0
 .../inc/smu13_driver_if_aldebaran.h           |  0
 .../inc/smu13_driver_if_yellow_carp.h         |  0
 .../pm/{ => swsmu}/inc/smu_11_0_cdr_table.h   |  0
 .../drm/amd/pm/{ => swsmu}/inc/smu_types.h    |  0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h    |  0
 .../pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h    |  0
 .../pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h  |  0
 .../amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h  |  0
 .../pm/{ => swsmu}/inc/smu_v11_0_pptable.h    |  0
 .../amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h   |  0
 .../amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h  |  0
 .../amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h   |  0
 .../amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h  |  0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h    |  0
 .../amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h  |  0
 .../drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h    |  0
 .../amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h |  0
 .../pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h    |  0
 .../pm/{ => swsmu}/inc/smu_v13_0_pptable.h    |  0
 .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  1 -
 .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  1 -
 87 files changed, 39 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/cik_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_smc.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/legacy_dpm.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/legacy_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/r600_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.h (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_smc.c (100%)
 rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/sislands_smc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/amd_powerplay.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/fiji_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hardwaremanager.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hwmgr.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/polaris10_pwrvirus.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/power_state.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_debug.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_endian.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_thermal.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/ppinterrupt.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu11_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_common.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_discrete.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_fusion.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8_fusion.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_cz.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_vi.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smumgr.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/tonga_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega10_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12/smu9_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega20_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_arcturus.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_cyan_skillfish.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_navi10.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_sienna_cichlid.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_vangogh.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu12_driver_if.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_aldebaran.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_yellow_carp.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_11_0_cdr_table.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_types.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_pptable.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h (100%)
 rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_pptable.h (100%)

diff --git a/drivers/gpu/drm/amd/pm/Makefile b/drivers/gpu/drm/amd/pm/Makefile
index d35ffde387f1..84c7203b5e46 100644
--- a/drivers/gpu/drm/amd/pm/Makefile
+++ b/drivers/gpu/drm/amd/pm/Makefile
@@ -21,20 +21,22 @@
 #
 
 subdir-ccflags-y += \
-		-I$(FULL_AMD_PATH)/pm/inc/  \
 		-I$(FULL_AMD_PATH)/include/asic_reg  \
 		-I$(FULL_AMD_PATH)/include  \
+		-I$(FULL_AMD_PATH)/pm/inc/  \
 		-I$(FULL_AMD_PATH)/pm/swsmu \
+		-I$(FULL_AMD_PATH)/pm/swsmu/inc \
 		-I$(FULL_AMD_PATH)/pm/swsmu/smu11 \
 		-I$(FULL_AMD_PATH)/pm/swsmu/smu12 \
 		-I$(FULL_AMD_PATH)/pm/swsmu/smu13 \
-		-I$(FULL_AMD_PATH)/pm/powerplay \
+		-I$(FULL_AMD_PATH)/pm/powerplay/inc \
 		-I$(FULL_AMD_PATH)/pm/powerplay/smumgr\
-		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr
+		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr \
+		-I$(FULL_AMD_PATH)/pm/legacy-dpm
 
 AMD_PM_PATH = ../pm
 
-PM_LIBS = swsmu powerplay
+PM_LIBS = swsmu powerplay legacy-dpm
 
 AMD_PM = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/,$(PM_LIBS)))
 
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
new file mode 100644
index 000000000000..baa4265d1daa
--- /dev/null
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
@@ -0,0 +1,32 @@
+#
+# Copyright 2021 Advanced Micro Devices, Inc.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and to permit persons to whom the
+# Software is furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+# THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+
+AMD_LEGACYDPM_PATH = ../pm/legacy-dpm
+
+LEGACYDPM_MGR-y = legacy_dpm.o
+
+LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
+LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
+
+AMD_LEGACYDPM_POWER = $(addprefix $(AMD_LEGACYDPM_PATH)/,$(LEGACYDPM_MGR-y))
+
+AMD_POWERPLAY_FILES += $(AMD_LEGACYDPM_POWER)
diff --git a/drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_smc.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/kv_smc.c
rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/ppsmc.h b/drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/ppsmc.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_smc.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/si_smc.c
rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
diff --git a/drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h b/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h
rename to drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
index 614d8b6a58ad..795a3624cbbf 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
+++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
@@ -28,11 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
 
 include $(AMD_POWERPLAY)
 
-POWER_MGR-y = amd_powerplay.o legacy_dpm.o
-
-POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
-
-POWER_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
+POWER_MGR-y = amd_powerplay.o
 
 AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR-y))
 
diff --git a/drivers/gpu/drm/amd/pm/inc/amd_powerplay.h b/drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/amd_powerplay.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
diff --git a/drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/hardwaremanager.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/hardwaremanager.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
diff --git a/drivers/gpu/drm/amd/pm/inc/hwmgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/hwmgr.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
diff --git a/drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h b/drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
diff --git a/drivers/gpu/drm/amd/pm/inc/power_state.h b/drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/power_state.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
diff --git a/drivers/gpu/drm/amd/pm/inc/pp_debug.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/pp_debug.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
diff --git a/drivers/gpu/drm/amd/pm/inc/pp_endian.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/pp_endian.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
diff --git a/drivers/gpu/drm/amd/pm/inc/pp_thermal.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/pp_thermal.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
diff --git a/drivers/gpu/drm/amd/pm/inc/ppinterrupt.h b/drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/ppinterrupt.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
diff --git a/drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu10.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu10.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu7.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu7.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu71.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu71.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu71_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu71_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu72.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu72.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu72_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu72_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu73.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu73.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu73_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu73_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu74.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu74.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu74_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu74_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu75.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu75.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu75_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu75_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_common.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu7_common.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu7_discrete.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_fusion.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu7_fusion.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu8.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu8.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu8_fusion.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu8_fusion.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu9.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu9.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smumgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smumgr.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
diff --git a/drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
diff --git a/drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h
rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
diff --git a/drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_types.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_types.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
similarity index 100%
rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h
rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
index a03bbd2a7aa0..1e6d76657bbb 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
@@ -33,7 +33,6 @@
 #include "smu11_driver_if_arcturus.h"
 #include "soc15_common.h"
 #include "atom.h"
-#include "power_state.h"
 #include "arcturus_ppt.h"
 #include "smu_v11_0_pptable.h"
 #include "arcturus_ppsmc.h"
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
index 3c82f5455f88..cc502a35f9ef 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
@@ -33,7 +33,6 @@
 #include "smu13_driver_if_aldebaran.h"
 #include "soc15_common.h"
 #include "atom.h"
-#include "power_state.h"
 #include "aldebaran_ppt.h"
 #include "smu_v13_0_pptable.h"
 #include "aldebaran_ppsmc.h"
-- 
2.29.0



* [PATCH V2 15/17] drm/amd/pm: drop unnecessary gfxoff controls
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (13 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 14/17] drm/amd/pm: relocate the power related headers Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 16/17] drm/amd/pm: revise the performance level setting APIs Evan Quan
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

The gfxoff controls added for some specific ASICs are unnecessary: the
functionality is not affected without them. Drop them, which also brings
those ASICs in line with the others.
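
For reference, the pattern being dropped looks roughly like the sketch
below (illustrative only, condensed from the call sites touched here):
gfxoff is force-disabled around a GFXCLK query and re-enabled afterwards,
even though the query does not require it:

	/* sketch of the removed wrapping, not an exact quote */
	if (clk_type == SMU_GFXCLK)
		amdgpu_gfx_off_ctrl(adev, false);	/* disable gfxoff */
	ret = smu_v11_0_get_dpm_ultimate_freq(smu, clk_type, min, max);
	if (clk_type == SMU_GFXCLK)
		amdgpu_gfx_off_ctrl(adev, true);	/* restore gfxoff */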

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Ia8475ef9e97635441aca5e0a7693e2a515498523
---
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |  4 ---
 .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 25 +------------------
 .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  7 ------
 .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  7 ------
 4 files changed, 1 insertion(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 2c3fd3cfef05..6f5a6886d3cc 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -1541,8 +1541,6 @@ static int smu_reset(struct smu_context *smu)
 	struct amdgpu_device *adev = smu->adev;
 	int ret;
 
-	amdgpu_gfx_off_ctrl(smu->adev, false);
-
 	ret = smu_hw_fini(adev);
 	if (ret)
 		return ret;
@@ -1555,8 +1553,6 @@ static int smu_reset(struct smu_context *smu)
 	if (ret)
 		return ret;
 
-	amdgpu_gfx_off_ctrl(smu->adev, true);
-
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 6a5064f4ea86..9766870987db 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -1036,10 +1036,6 @@ static int sienna_cichlid_print_clk_levels(struct smu_context *smu,
 		if (ret)
 			goto print_clk_out;
 
-		/* no need to disable gfxoff when retrieving the current gfxclk */
-		if ((clk_type == SMU_GFXCLK) || (clk_type == SMU_SCLK))
-			amdgpu_gfx_off_ctrl(adev, false);
-
 		ret = smu_v11_0_get_dpm_level_count(smu, clk_type, &count);
 		if (ret)
 			goto print_clk_out;
@@ -1168,25 +1164,18 @@ static int sienna_cichlid_print_clk_levels(struct smu_context *smu,
 	}
 
 print_clk_out:
-	if ((clk_type == SMU_GFXCLK) || (clk_type == SMU_SCLK))
-		amdgpu_gfx_off_ctrl(adev, true);
-
 	return size;
 }
 
 static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
 				   enum smu_clk_type clk_type, uint32_t mask)
 {
-	struct amdgpu_device *adev = smu->adev;
 	int ret = 0;
 	uint32_t soft_min_level = 0, soft_max_level = 0, min_freq = 0, max_freq = 0;
 
 	soft_min_level = mask ? (ffs(mask) - 1) : 0;
 	soft_max_level = mask ? (fls(mask) - 1) : 0;
 
-	if ((clk_type == SMU_GFXCLK) || (clk_type == SMU_SCLK))
-		amdgpu_gfx_off_ctrl(adev, false);
-
 	switch (clk_type) {
 	case SMU_GFXCLK:
 	case SMU_SCLK:
@@ -1220,9 +1209,6 @@ static int sienna_cichlid_force_clk_levels(struct smu_context *smu,
 	}
 
 forec_level_out:
-	if ((clk_type == SMU_GFXCLK) || (clk_type == SMU_SCLK))
-		amdgpu_gfx_off_ctrl(adev, true);
-
 	return 0;
 }
 
@@ -1865,16 +1851,7 @@ static int sienna_cichlid_get_dpm_ultimate_freq(struct smu_context *smu,
 				enum smu_clk_type clk_type,
 				uint32_t *min, uint32_t *max)
 {
-	struct amdgpu_device *adev = smu->adev;
-	int ret;
-
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, false);
-	ret = smu_v11_0_get_dpm_ultimate_freq(smu, clk_type, min, max);
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, true);
-
-	return ret;
+	return smu_v11_0_get_dpm_ultimate_freq(smu, clk_type, min, max);
 }
 
 static void sienna_cichlid_dump_od_table(struct smu_context *smu,
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
index 2a53b5b1d261..fd188ee3ab54 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
@@ -1798,7 +1798,6 @@ int smu_v11_0_set_soft_freq_limited_range(struct smu_context *smu,
 					  uint32_t min,
 					  uint32_t max)
 {
-	struct amdgpu_device *adev = smu->adev;
 	int ret = 0, clk_id = 0;
 	uint32_t param;
 
@@ -1811,9 +1810,6 @@ int smu_v11_0_set_soft_freq_limited_range(struct smu_context *smu,
 	if (clk_id < 0)
 		return clk_id;
 
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, false);
-
 	if (max > 0) {
 		param = (uint32_t)((clk_id << 16) | (max & 0xffff));
 		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxByFreq,
@@ -1831,9 +1827,6 @@ int smu_v11_0_set_soft_freq_limited_range(struct smu_context *smu,
 	}
 
 out:
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, true);
-
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
index 4ed01e9d88fb..1635916be851 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
@@ -1528,7 +1528,6 @@ int smu_v13_0_set_soft_freq_limited_range(struct smu_context *smu,
 					  uint32_t min,
 					  uint32_t max)
 {
-	struct amdgpu_device *adev = smu->adev;
 	int ret = 0, clk_id = 0;
 	uint32_t param;
 
@@ -1541,9 +1540,6 @@ int smu_v13_0_set_soft_freq_limited_range(struct smu_context *smu,
 	if (clk_id < 0)
 		return clk_id;
 
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, false);
-
 	if (max > 0) {
 		param = (uint32_t)((clk_id << 16) | (max & 0xffff));
 		ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetSoftMaxByFreq,
@@ -1561,9 +1557,6 @@ int smu_v13_0_set_soft_freq_limited_range(struct smu_context *smu,
 	}
 
 out:
-	if (clk_type == SMU_GFXCLK)
-		amdgpu_gfx_off_ctrl(adev, true);
-
 	return ret;
 }
 
-- 
2.29.0



* [PATCH V2 16/17] drm/amd/pm: revise the performance level setting APIs
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (14 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 15/17] drm/amd/pm: drop unnecessary gfxoff controls Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  7:42 ` [PATCH V2 17/17] drm/amd/pm: unified lock protections in amdgpu_dpm.c Evan Quan
  2021-11-30  9:58 ` [PATCH V2 00/17] Unified entry point for other blocks to interact with power Christian König
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

Avoid the cross calls which make it impossible to enforce lock
protection on amdgpu_dpm_force_performance_level().
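
The UMD pstate gating is now performed directly in amdgpu_pm.c around
the amdgpu_dpm_force_performance_level() call, instead of from inside
the powerplay/swsmu backends. A condensed sketch of the new flow
(simplified from the hunk below):

	if (!(current_level & profile_mode_mask) &&
	    (level & profile_mode_mask)) {
		/* entering a profile (UMD pstate) level: ungate GFX PG/CG first */
		amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
						       AMD_PG_STATE_UNGATE);
		amdgpu_device_ip_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
						       AMD_CG_STATE_UNGATE);
	} else if ((current_level & profile_mode_mask) &&
		   !(level & profile_mode_mask)) {
		/* leaving profile levels: re-gate GFX CG/PG */
		amdgpu_device_ip_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
						       AMD_CG_STATE_GATE);
		amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
						       AMD_PG_STATE_GATE);
	}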

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Ie658140f40ab906ce2ec47576a086062b61076a6
---
 drivers/gpu/drm/amd/pm/amdgpu_pm.c            | 29 ++++++++++++++++---
 .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    | 17 ++++++-----
 .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  | 12 --------
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 12 --------
 4 files changed, 34 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index 89e1134d660f..28150267832c 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -305,6 +305,10 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 	enum amd_dpm_forced_level level;
 	enum amd_dpm_forced_level current_level;
 	int ret = 0;
+	uint32_t profile_mode_mask = AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD |
+					AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK |
+					AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK |
+					AMD_DPM_FORCED_LEVEL_PROFILE_PEAK;
 
 	if (amdgpu_in_reset(adev))
 		return -EPERM;
@@ -358,10 +362,7 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 	}
 
 	/* profile_exit setting is valid only when current mode is in profile mode */
-	if (!(current_level & (AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD |
-	    AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK |
-	    AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK |
-	    AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)) &&
+	if (!(current_level & profile_mode_mask) &&
 	    (level == AMD_DPM_FORCED_LEVEL_PROFILE_EXIT)) {
 		pr_err("Currently not in any profile mode!\n");
 		pm_runtime_mark_last_busy(ddev->dev);
@@ -369,6 +370,26 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 		return -EINVAL;
 	}
 
+	if (!(current_level & profile_mode_mask) &&
+	      (level & profile_mode_mask)) {
+		/* enter UMD Pstate */
+		amdgpu_device_ip_set_powergating_state(adev,
+						       AMD_IP_BLOCK_TYPE_GFX,
+						       AMD_PG_STATE_UNGATE);
+		amdgpu_device_ip_set_clockgating_state(adev,
+						       AMD_IP_BLOCK_TYPE_GFX,
+						       AMD_CG_STATE_UNGATE);
+	} else if ((current_level & profile_mode_mask) &&
+		    !(level & profile_mode_mask)) {
+		/* exit UMD Pstate */
+		amdgpu_device_ip_set_clockgating_state(adev,
+						       AMD_IP_BLOCK_TYPE_GFX,
+						       AMD_CG_STATE_GATE);
+		amdgpu_device_ip_set_powergating_state(adev,
+						       AMD_IP_BLOCK_TYPE_GFX,
+						       AMD_PG_STATE_GATE);
+	}
+
 	if (amdgpu_dpm_force_performance_level(adev, level)) {
 		pm_runtime_mark_last_busy(ddev->dev);
 		pm_runtime_put_autosuspend(ddev->dev);
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
index 9e6bc562fc5a..8d5916087861 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
@@ -1382,6 +1382,7 @@ static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
 
 static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 {
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct amdgpu_ps *ps;
 	enum amd_pm_state_type dpm_state;
 	int ret;
@@ -1405,7 +1406,7 @@ static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 	else
 		return -EINVAL;
 
-	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
+	if (amdgpu_dpm == 1 && pp_funcs->print_power_state) {
 		printk("switching from power state:\n");
 		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
 		printk("switching to power state:\n");
@@ -1414,14 +1415,14 @@ static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 
 	/* update whether vce is active */
 	ps->vce_active = adev->pm.dpm.vce_active;
-	if (adev->powerplay.pp_funcs->display_configuration_changed)
+	if (pp_funcs->display_configuration_changed)
 		amdgpu_dpm_display_configuration_changed(adev);
 
 	ret = amdgpu_dpm_pre_set_power_state(adev);
 	if (ret)
 		return ret;
 
-	if (adev->powerplay.pp_funcs->check_state_equal) {
+	if (pp_funcs->check_state_equal) {
 		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
 			equal = false;
 	}
@@ -1429,24 +1430,24 @@ static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 	if (equal)
 		return 0;
 
-	if (adev->powerplay.pp_funcs->set_power_state)
-		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
+	if (pp_funcs->set_power_state)
+		pp_funcs->set_power_state(adev->powerplay.pp_handle);
 
 	amdgpu_dpm_post_set_power_state(adev);
 
 	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
 	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
 
-	if (adev->powerplay.pp_funcs->force_performance_level) {
+	if (pp_funcs->force_performance_level) {
 		if (adev->pm.dpm.thermal_active) {
 			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
 			/* force low perf level for thermal */
-			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
+			pp_funcs->force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
 			/* save the user's level */
 			adev->pm.dpm.forced_level = level;
 		} else {
 			/* otherwise, user selected level */
-			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
+			pp_funcs->force_performance_level(adev, adev->pm.dpm.forced_level);
 		}
 	}
 
diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
index d57d5c28c013..5a14ddd3ef05 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
@@ -323,12 +323,6 @@ static void pp_dpm_en_umd_pstate(struct pp_hwmgr  *hwmgr,
 		if (*level & profile_mode_mask) {
 			hwmgr->saved_dpm_level = hwmgr->dpm_level;
 			hwmgr->en_umd_pstate = true;
-			amdgpu_device_ip_set_powergating_state(hwmgr->adev,
-					AMD_IP_BLOCK_TYPE_GFX,
-					AMD_PG_STATE_UNGATE);
-			amdgpu_device_ip_set_clockgating_state(hwmgr->adev,
-						AMD_IP_BLOCK_TYPE_GFX,
-						AMD_CG_STATE_UNGATE);
 		}
 	} else {
 		/* exit umd pstate, restore level, enable gfx cg*/
@@ -336,12 +330,6 @@ static void pp_dpm_en_umd_pstate(struct pp_hwmgr  *hwmgr,
 			if (*level == AMD_DPM_FORCED_LEVEL_PROFILE_EXIT)
 				*level = hwmgr->saved_dpm_level;
 			hwmgr->en_umd_pstate = false;
-			amdgpu_device_ip_set_clockgating_state(hwmgr->adev,
-					AMD_IP_BLOCK_TYPE_GFX,
-					AMD_CG_STATE_GATE);
-			amdgpu_device_ip_set_powergating_state(hwmgr->adev,
-					AMD_IP_BLOCK_TYPE_GFX,
-					AMD_PG_STATE_GATE);
 		}
 	}
 }
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index 6f5a6886d3cc..241eebef9939 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -1681,12 +1681,6 @@ static int smu_enable_umd_pstate(void *handle,
 			smu_dpm_ctx->saved_dpm_level = smu_dpm_ctx->dpm_level;
 			smu_dpm_ctx->enable_umd_pstate = true;
 			smu_gpo_control(smu, false);
-			amdgpu_device_ip_set_powergating_state(smu->adev,
-							       AMD_IP_BLOCK_TYPE_GFX,
-							       AMD_PG_STATE_UNGATE);
-			amdgpu_device_ip_set_clockgating_state(smu->adev,
-							       AMD_IP_BLOCK_TYPE_GFX,
-							       AMD_CG_STATE_UNGATE);
 			smu_gfx_ulv_control(smu, false);
 			smu_deep_sleep_control(smu, false);
 			amdgpu_asic_update_umd_stable_pstate(smu->adev, true);
@@ -1700,12 +1694,6 @@ static int smu_enable_umd_pstate(void *handle,
 			amdgpu_asic_update_umd_stable_pstate(smu->adev, false);
 			smu_deep_sleep_control(smu, true);
 			smu_gfx_ulv_control(smu, true);
-			amdgpu_device_ip_set_clockgating_state(smu->adev,
-							       AMD_IP_BLOCK_TYPE_GFX,
-							       AMD_CG_STATE_GATE);
-			amdgpu_device_ip_set_powergating_state(smu->adev,
-							       AMD_IP_BLOCK_TYPE_GFX,
-							       AMD_PG_STATE_GATE);
 			smu_gpo_control(smu, true);
 		}
 	}
-- 
2.29.0



* [PATCH V2 17/17] drm/amd/pm: unified lock protections in amdgpu_dpm.c
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (15 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 16/17] drm/amd/pm: revise the performance level setting APIs Evan Quan
@ 2021-11-30  7:42 ` Evan Quan
  2021-11-30  9:58 ` [PATCH V2 00/17] Unified entry point for other blocks to interact with power Christian König
  17 siblings, 0 replies; 44+ messages in thread
From: Evan Quan @ 2021-11-30  7:42 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng, christian.koenig, Evan Quan

With amdgpu_dpm.c now the only entry point, it is safe and reasonable
to enforce the lock protections there. With that in place, the other
internally used power locks can be dropped.
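
Every wrapper in amdgpu_dpm.c now follows the same shape; a minimal
sketch is below (amdgpu_dpm_foo and pp_funcs->foo are placeholder
names, not real callbacks):

	int amdgpu_dpm_foo(struct amdgpu_device *adev)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
		int ret;

		if (!pp_funcs->foo)
			return -EOPNOTSUPP;	/* or 0, per the API contract */

		mutex_lock(&adev->pm.mutex);
		ret = pp_funcs->foo(adev->powerplay.pp_handle);
		mutex_unlock(&adev->pm.mutex);

		return ret;
	}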

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: Iad228cad0b3d8c41927def08965a52525f3f51d3
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c        | 716 +++++++++++++++------
 drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c |  16 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c |  16 +-
 3 files changed, 533 insertions(+), 215 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index a5cbbf9367fe..c59a82478fd8 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -39,15 +39,33 @@
 int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
+
+	if (!pp_funcs->get_sclk)
+		return 0;
 
-	return pp_funcs->get_sclk((adev)->powerplay.pp_handle, (low));
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_sclk((adev)->powerplay.pp_handle,
+				 low);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_mclk(struct amdgpu_device *adev, bool low)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
+
+	if (!pp_funcs->get_mclk)
+		return 0;
+
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_mclk((adev)->powerplay.pp_handle,
+				 low);
+	mutex_unlock(&adev->pm.mutex);
 
-	return pp_funcs->get_mclk((adev)->powerplay.pp_handle, (low));
+	return ret;
 }
 
 int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev, uint32_t block_type, bool gate)
@@ -62,52 +80,20 @@ int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev, uint32_t block
 		return 0;
 	}
 
+	mutex_lock(&adev->pm.mutex);
+
 	switch (block_type) {
 	case AMD_IP_BLOCK_TYPE_UVD:
 	case AMD_IP_BLOCK_TYPE_VCE:
-		if (pp_funcs && pp_funcs->set_powergating_by_smu) {
-			/*
-			 * TODO: need a better lock mechanism
-			 *
-			 * Here adev->pm.mutex lock protection is enforced on
-			 * UVD and VCE cases only. Since for other cases, there
-			 * may be already lock protection in amdgpu_pm.c.
-			 * This is a quick fix for the deadlock issue below.
-			 *     NFO: task ocltst:2028 blocked for more than 120 seconds.
-			 *     Tainted: G           OE     5.0.0-37-generic #40~18.04.1-Ubuntu
-			 *     echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
-			 *     cltst          D    0  2028   2026 0x00000000
-			 *     all Trace:
-			 *     __schedule+0x2c0/0x870
-			 *     schedule+0x2c/0x70
-			 *     schedule_preempt_disabled+0xe/0x10
-			 *     __mutex_lock.isra.9+0x26d/0x4e0
-			 *     __mutex_lock_slowpath+0x13/0x20
-			 *     ? __mutex_lock_slowpath+0x13/0x20
-			 *     mutex_lock+0x2f/0x40
-			 *     amdgpu_dpm_set_powergating_by_smu+0x64/0xe0 [amdgpu]
-			 *     gfx_v8_0_enable_gfx_static_mg_power_gating+0x3c/0x70 [amdgpu]
-			 *     gfx_v8_0_set_powergating_state+0x66/0x260 [amdgpu]
-			 *     amdgpu_device_ip_set_powergating_state+0x62/0xb0 [amdgpu]
-			 *     pp_dpm_force_performance_level+0xe7/0x100 [amdgpu]
-			 *     amdgpu_set_dpm_forced_performance_level+0x129/0x330 [amdgpu]
-			 */
-			mutex_lock(&adev->pm.mutex);
-			ret = (pp_funcs->set_powergating_by_smu(
-				(adev)->powerplay.pp_handle, block_type, gate));
-			mutex_unlock(&adev->pm.mutex);
-		}
-		break;
 	case AMD_IP_BLOCK_TYPE_GFX:
 	case AMD_IP_BLOCK_TYPE_VCN:
 	case AMD_IP_BLOCK_TYPE_SDMA:
 	case AMD_IP_BLOCK_TYPE_JPEG:
 	case AMD_IP_BLOCK_TYPE_GMC:
 	case AMD_IP_BLOCK_TYPE_ACP:
-		if (pp_funcs && pp_funcs->set_powergating_by_smu) {
+		if (pp_funcs && pp_funcs->set_powergating_by_smu)
 			ret = (pp_funcs->set_powergating_by_smu(
 				(adev)->powerplay.pp_handle, block_type, gate));
-		}
 		break;
 	default:
 		break;
@@ -116,6 +102,8 @@ int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev, uint32_t block
 	if (!ret)
 		atomic_set(&adev->pm.pwr_state[block_type], pwr_state);
 
+	mutex_unlock(&adev->pm.mutex);
+
 	return ret;
 }
 
@@ -128,9 +116,13 @@ int amdgpu_dpm_baco_enter(struct amdgpu_device *adev)
 	if (!pp_funcs || !pp_funcs->set_asic_baco_state)
 		return -ENOENT;
 
+	mutex_lock(&adev->pm.mutex);
+
 	/* enter BACO state */
 	ret = pp_funcs->set_asic_baco_state(pp_handle, 1);
 
+	mutex_unlock(&adev->pm.mutex);
+
 	return ret;
 }
 
@@ -143,9 +135,13 @@ int amdgpu_dpm_baco_exit(struct amdgpu_device *adev)
 	if (!pp_funcs || !pp_funcs->set_asic_baco_state)
 		return -ENOENT;
 
+	mutex_lock(&adev->pm.mutex);
+
 	/* exit BACO state */
 	ret = pp_funcs->set_asic_baco_state(pp_handle, 0);
 
+	mutex_unlock(&adev->pm.mutex);
+
 	return ret;
 }
 
@@ -156,9 +152,13 @@ int amdgpu_dpm_set_mp1_state(struct amdgpu_device *adev,
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
 	if (pp_funcs && pp_funcs->set_mp1_state) {
+		mutex_lock(&adev->pm.mutex);
+
 		ret = pp_funcs->set_mp1_state(
 				adev->powerplay.pp_handle,
 				mp1_state);
+
+		mutex_unlock(&adev->pm.mutex);
 	}
 
 	return ret;
@@ -169,25 +169,37 @@ bool amdgpu_dpm_is_baco_supported(struct amdgpu_device *adev)
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	void *pp_handle = adev->powerplay.pp_handle;
 	bool baco_cap;
+	int ret = 0;
 
 	if (!pp_funcs || !pp_funcs->get_asic_baco_capability)
 		return false;
 
-	if (pp_funcs->get_asic_baco_capability(pp_handle, &baco_cap))
-		return false;
+	mutex_lock(&adev->pm.mutex);
+
+	ret = pp_funcs->get_asic_baco_capability(pp_handle,
+						 &baco_cap);
 
-	return baco_cap;
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret ? false : baco_cap;
 }
 
 int amdgpu_dpm_mode2_reset(struct amdgpu_device *adev)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	void *pp_handle = adev->powerplay.pp_handle;
+	int ret = 0;
 
 	if (!pp_funcs || !pp_funcs->asic_reset_mode_2)
 		return -ENOENT;
 
-	return pp_funcs->asic_reset_mode_2(pp_handle);
+	mutex_lock(&adev->pm.mutex);
+
+	ret = pp_funcs->asic_reset_mode_2(pp_handle);
+
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
@@ -199,37 +211,47 @@ int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
 	if (!pp_funcs || !pp_funcs->set_asic_baco_state)
 		return -ENOENT;
 
+	mutex_lock(&adev->pm.mutex);
+
 	/* enter BACO state */
 	ret = pp_funcs->set_asic_baco_state(pp_handle, 1);
 	if (ret)
-		return ret;
+		goto out;
 
 	/* exit BACO state */
 	ret = pp_funcs->set_asic_baco_state(pp_handle, 0);
-	if (ret)
-		return ret;
 
-	return 0;
+out:
+	mutex_unlock(&adev->pm.mutex);
+	return ret;
 }
 
 bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	bool support_mode1_reset = false;
 
-	if (is_support_sw_smu(adev))
-		return smu_mode1_reset_is_support(smu);
+	if (is_support_sw_smu(adev)) {
+		mutex_lock(&adev->pm.mutex);
+		support_mode1_reset = smu_mode1_reset_is_support(smu);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
-	return false;
+	return support_mode1_reset;
 }
 
 int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = -EOPNOTSUPP;
 
-	if (is_support_sw_smu(adev))
-		return smu_mode1_reset(smu);
+	if (is_support_sw_smu(adev)) {
+		mutex_lock(&adev->pm.mutex);
+		ret = smu_mode1_reset(smu);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
-	return -EOPNOTSUPP;
+	return ret;
 }
 
 int amdgpu_dpm_switch_power_profile(struct amdgpu_device *adev,
@@ -242,9 +264,12 @@ int amdgpu_dpm_switch_power_profile(struct amdgpu_device *adev,
 	if (amdgpu_sriov_vf(adev))
 		return 0;
 
-	if (pp_funcs && pp_funcs->switch_power_profile)
+	if (pp_funcs && pp_funcs->switch_power_profile) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->switch_power_profile(
 			adev->powerplay.pp_handle, type, en);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -255,9 +280,12 @@ int amdgpu_dpm_set_xgmi_pstate(struct amdgpu_device *adev,
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	int ret = 0;
 
-	if (pp_funcs && pp_funcs->set_xgmi_pstate)
+	if (pp_funcs && pp_funcs->set_xgmi_pstate) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->set_xgmi_pstate(adev->powerplay.pp_handle,
 								pstate);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -269,8 +297,11 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	void *pp_handle = adev->powerplay.pp_handle;
 
-	if (pp_funcs && pp_funcs->set_df_cstate)
+	if (pp_funcs && pp_funcs->set_df_cstate) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->set_df_cstate(pp_handle, cstate);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -278,11 +309,15 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
 int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device *adev, bool en)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = 0;
 
-	if (is_support_sw_smu(adev))
-		return smu_allow_xgmi_power_down(smu, en);
+	if (is_support_sw_smu(adev)) {
+		mutex_lock(&adev->pm.mutex);
+		ret = smu_allow_xgmi_power_down(smu, en);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
-	return 0;
+	return ret;
 }
 
 int amdgpu_dpm_enable_mgpu_fan_boost(struct amdgpu_device *adev)
@@ -292,8 +327,11 @@ int amdgpu_dpm_enable_mgpu_fan_boost(struct amdgpu_device *adev)
 			adev->powerplay.pp_funcs;
 	int ret = 0;
 
-	if (pp_funcs && pp_funcs->enable_mgpu_fan_boost)
+	if (pp_funcs && pp_funcs->enable_mgpu_fan_boost) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->enable_mgpu_fan_boost(pp_handle);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -306,9 +344,12 @@ int amdgpu_dpm_set_clockgating_by_smu(struct amdgpu_device *adev,
 			adev->powerplay.pp_funcs;
 	int ret = 0;
 
-	if (pp_funcs && pp_funcs->set_clockgating_by_smu)
+	if (pp_funcs && pp_funcs->set_clockgating_by_smu) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->set_clockgating_by_smu(pp_handle,
 						       msg_id);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -321,9 +362,12 @@ int amdgpu_dpm_smu_i2c_bus_access(struct amdgpu_device *adev,
 			adev->powerplay.pp_funcs;
 	int ret = -EOPNOTSUPP;
 
-	if (pp_funcs && pp_funcs->smu_i2c_bus_access)
+	if (pp_funcs && pp_funcs->smu_i2c_bus_access) {
+		mutex_lock(&adev->pm.mutex);
 		ret = pp_funcs->smu_i2c_bus_access(pp_handle,
 						   acquire);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -336,13 +380,15 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
 			adev->pm.ac_power = true;
 		else
 			adev->pm.ac_power = false;
+
 		if (adev->powerplay.pp_funcs &&
 		    adev->powerplay.pp_funcs->enable_bapm)
 			amdgpu_dpm_enable_bapm(adev, adev->pm.ac_power);
-		mutex_unlock(&adev->pm.mutex);
 
 		if (is_support_sw_smu(adev))
 			smu_set_ac_dc(adev->powerplay.pp_handle);
+
+		mutex_unlock(&adev->pm.mutex);
 	}
 }
 
@@ -350,16 +396,19 @@ int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors senso
 			   void *data, uint32_t *size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-	int ret = 0;
+	int ret = -EINVAL;
 
 	if (!data || !size)
 		return -EINVAL;
 
-	if (pp_funcs && pp_funcs->read_sensor)
-		ret = pp_funcs->read_sensor((adev)->powerplay.pp_handle,
-								    sensor, data, size);
-	else
-		ret = -EINVAL;
+	if (pp_funcs && pp_funcs->read_sensor) {
+		mutex_lock(&adev->pm.mutex);
+		ret = pp_funcs->read_sensor(adev->powerplay.pp_handle,
+					    sensor,
+					    data,
+					    size);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -371,7 +420,9 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 	if (!pp_funcs->pm_compute_clocks)
 		return;
 
+	mutex_lock(&adev->pm.mutex);
 	pp_funcs->pm_compute_clocks(adev->powerplay.pp_handle);
+	mutex_unlock(&adev->pm.mutex);
 }
 
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
@@ -406,25 +457,39 @@ void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
 
 int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version)
 {
-	int r;
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int r = 0;
 
-	if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->load_firmware) {
-		r = adev->powerplay.pp_funcs->load_firmware(adev->powerplay.pp_handle);
-		if (r) {
-			pr_err("smu firmware loading failed\n");
-			return r;
-		}
+	if (!pp_funcs->load_firmware)
+		return 0;
 
-		if (smu_version)
-			*smu_version = adev->pm.fw_version;
+	mutex_lock(&adev->pm.mutex);
+	r = pp_funcs->load_firmware(adev->powerplay.pp_handle);
+	if (r) {
+		pr_err("smu firmware loading failed\n");
+		goto out;
 	}
 
-	return 0;
+	if (smu_version)
+		*smu_version = adev->pm.fw_version;
+
+out:
+	mutex_unlock(&adev->pm.mutex);
+	return r;
 }
 
 int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
 {
-	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
+	int ret = 0;
+
+	if (is_support_sw_smu(adev)) {
+		mutex_lock(&adev->pm.mutex);
+		ret = smu_set_light_sbr(adev->powerplay.pp_handle,
+					enable);
+		mutex_unlock(&adev->pm.mutex);
+	}
+
+	return ret;
 }
 
 int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
@@ -432,8 +497,11 @@ int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
 	struct smu_context *smu = adev->powerplay.pp_handle;
 	int ret = 0;
 
-	if (is_support_sw_smu(adev))
+	if (is_support_sw_smu(adev)) {
+		mutex_lock(&adev->pm.mutex);
 		ret = smu_send_hbm_bad_pages_num(smu, size);
+		mutex_unlock(&adev->pm.mutex);
+	}
 
 	return ret;
 }
@@ -443,15 +511,22 @@ int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
 				  uint32_t *min,
 				  uint32_t *max)
 {
+	int ret = 0;
+
+	if (type != PP_SCLK)
+		return -EINVAL;
+
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	switch (type) {
-	case PP_SCLK:
-		return smu_get_dpm_freq_range(adev->powerplay.pp_handle, SMU_SCLK, min, max);
-	default:
-		return -EINVAL;
-	}
+	mutex_lock(&adev->pm.mutex);
+	ret = smu_get_dpm_freq_range(adev->powerplay.pp_handle,
+				     SMU_SCLK,
+				     min,
+				     max);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
@@ -460,26 +535,37 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
 				   uint32_t max)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = 0;
+
+	if (type != PP_SCLK)
+		return -EINVAL;
 
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	switch (type) {
-	case PP_SCLK:
-		return smu_set_soft_freq_range(smu, SMU_SCLK, min, max);
-	default:
-		return -EINVAL;
-	}
+	mutex_lock(&adev->pm.mutex);
+	ret = smu_set_soft_freq_range(smu,
+				      SMU_SCLK,
+				      min,
+				      max);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = 0;
 
 	if (!is_support_sw_smu(adev))
 		return 0;
 
-	return smu_write_watermarks_table(smu);
+	mutex_lock(&adev->pm.mutex);
+	ret = smu_write_watermarks_table(smu);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
@@ -487,27 +573,40 @@ int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
 			      uint64_t event_arg)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = 0;
 
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	return smu_wait_for_event(smu, event, event_arg);
+	mutex_lock(&adev->pm.mutex);
+	ret = smu_wait_for_event(smu, event, event_arg);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
+	int ret = 0;
 
 	if (!is_support_sw_smu(adev))
 		return -EOPNOTSUPP;
 
-	return smu_get_status_gfxoff(smu, value);
+	mutex_lock(&adev->pm.mutex);
+	ret = smu_get_status_gfxoff(smu, value);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
 
+	if (!is_support_sw_smu(adev))
+		return 0;
+
 	return atomic64_read(&smu->throttle_int_counter);
 }
 
@@ -542,12 +641,17 @@ struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
 						     uint32_t idx)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	struct amd_vce_state *vstate = NULL;
 
 	if (!pp_funcs->get_vce_clock_state)
 		return NULL;
 
-	return pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle,
-					     idx);
+	mutex_lock(&adev->pm.mutex);
+	vstate = pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle,
+					       idx);
+	mutex_unlock(&adev->pm.mutex);
+
+	return vstate;
 }
 
 void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
@@ -555,9 +659,11 @@ void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 
+	mutex_lock(&adev->pm.mutex);
+
 	if (!pp_funcs->get_current_power_state) {
 		*state = adev->pm.dpm.user_state;
-		return;
+		goto out;
 	}
 
 	*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
@@ -565,13 +671,17 @@ void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
 	    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
 		*state = adev->pm.dpm.user_state;
 
+out:
+	mutex_unlock(&adev->pm.mutex);
 	return;
 }
 
 void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
 				enum amd_pm_state_type state)
 {
+	mutex_lock(&adev->pm.mutex);
 	adev->pm.dpm.user_state = state;
+	mutex_unlock(&adev->pm.mutex);
 }
 
 enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
@@ -579,10 +689,12 @@ enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum amd_dpm_forced_level level;
 
+	mutex_lock(&adev->pm.mutex);
 	if (pp_funcs->get_performance_level)
 		level = pp_funcs->get_performance_level(adev->powerplay.pp_handle);
 	else
 		level = adev->pm.dpm.forced_level;
+	mutex_unlock(&adev->pm.mutex);
 
 	return level;
 }
@@ -591,30 +703,46 @@ int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
 				       enum amd_dpm_forced_level level)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
+
+	mutex_lock(&adev->pm.mutex);
 
-	if (pp_funcs->force_performance_level) {
-		if (adev->pm.dpm.thermal_active)
-			return -EINVAL;
+	if (!pp_funcs->force_performance_level)
+		goto out;
 
-		if (pp_funcs->force_performance_level(adev->powerplay.pp_handle,
-						      level))
-			return -EINVAL;
+	if (adev->pm.dpm.thermal_active) {
+		ret = -EINVAL;
+		goto out;
 	}
 
-	adev->pm.dpm.forced_level = level;
+	if (pp_funcs->force_performance_level(adev->powerplay.pp_handle,
+					      level))
+		ret = -EINVAL;
 
-	return 0;
+out:
+	if (!ret)
+		adev->pm.dpm.forced_level = level;
+
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
 				 struct pp_states_info *states)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_pp_num_states)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_pp_num_states(adev->powerplay.pp_handle,
+					  states);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
@@ -622,21 +750,34 @@ int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
 			      enum amd_pm_state_type *user_state)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->dispatch_tasks)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->dispatch_tasks(adev->powerplay.pp_handle, task_id, user_state);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->dispatch_tasks(adev->powerplay.pp_handle,
+				       task_id,
+				       user_state);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_pp_table)
 		return 0;
 
-	return pp_funcs->get_pp_table(adev->powerplay.pp_handle, table);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_pp_table(adev->powerplay.pp_handle,
+				     table);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
@@ -645,14 +786,19 @@ int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
 				      uint32_t size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_fine_grain_clk_vol)
 		return 0;
 
-	return pp_funcs->set_fine_grain_clk_vol(adev->powerplay.pp_handle,
-						type,
-						input,
-						size);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_fine_grain_clk_vol(adev->powerplay.pp_handle,
+					       type,
+					       input,
+					       size);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
@@ -661,14 +807,19 @@ int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
 				  uint32_t size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->odn_edit_dpm_table)
 		return 0;
 
-	return pp_funcs->odn_edit_dpm_table(adev->powerplay.pp_handle,
-					    type,
-					    input,
-					    size);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->odn_edit_dpm_table(adev->powerplay.pp_handle,
+					   type,
+					   input,
+					   size);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
@@ -676,36 +827,51 @@ int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
 				  char *buf)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->print_clock_levels)
 		return 0;
 
-	return pp_funcs->print_clock_levels(adev->powerplay.pp_handle,
-					    type,
-					    buf);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->print_clock_levels(adev->powerplay.pp_handle,
+					   type,
+					   buf);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
 				    uint64_t ppfeature_masks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_ppfeature_status)
 		return 0;
 
-	return pp_funcs->set_ppfeature_status(adev->powerplay.pp_handle,
-					      ppfeature_masks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_ppfeature_status(adev->powerplay.pp_handle,
+					     ppfeature_masks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_ppfeature_status)
 		return 0;
 
-	return pp_funcs->get_ppfeature_status(adev->powerplay.pp_handle,
-					      buf);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_ppfeature_status(adev->powerplay.pp_handle,
+					     buf);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
@@ -713,88 +879,131 @@ int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
 				 uint32_t mask)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->force_clock_level)
 		return 0;
 
-	return pp_funcs->force_clock_level(adev->powerplay.pp_handle,
-					   type,
-					   mask);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->force_clock_level(adev->powerplay.pp_handle,
+					  type,
+					  mask);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_sclk_od)
 		return 0;
 
-	return pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_sclk_od)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_sclk_od(adev->powerplay.pp_handle,
+				    value);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_mclk_od)
 		return 0;
 
-	return pp_funcs->get_mclk_od(adev->powerplay.pp_handle);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_mclk_od(adev->powerplay.pp_handle);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_mclk_od)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_mclk_od(adev->powerplay.pp_handle,
+				    value);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
 				      char *buf)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_power_profile_mode)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_power_profile_mode(adev->powerplay.pp_handle,
-						buf);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_power_profile_mode(adev->powerplay.pp_handle,
+					       buf);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
 				      long *input, uint32_t size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_power_profile_mode)
 		return 0;
 
-	return pp_funcs->set_power_profile_mode(adev->powerplay.pp_handle,
-						input,
-						size);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_power_profile_mode(adev->powerplay.pp_handle,
+					       input,
+					       size);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_gpu_metrics)
 		return 0;
 
-	return pp_funcs->get_gpu_metrics(adev->powerplay.pp_handle, table);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_gpu_metrics(adev->powerplay.pp_handle,
+					table);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
@@ -805,7 +1014,9 @@ int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
 	if (!pp_funcs->get_fan_control_mode)
 		return -EOPNOTSUPP;
 
+	mutex_lock(&adev->pm.mutex);
 	*fan_mode = pp_funcs->get_fan_control_mode(adev->powerplay.pp_handle);
+	mutex_unlock(&adev->pm.mutex);
 
 	return 0;
 }
@@ -814,44 +1025,68 @@ int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
 				 uint32_t speed)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_fan_speed_pwm)
 		return -EINVAL;
 
-	return pp_funcs->set_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_fan_speed_pwm(adev->powerplay.pp_handle,
+					  speed);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
 				 uint32_t *speed)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_fan_speed_pwm)
 		return -EINVAL;
 
-	return pp_funcs->get_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_fan_speed_pwm(adev->powerplay.pp_handle,
+					  speed);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
 				 uint32_t *speed)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_fan_speed_rpm)
 		return -EINVAL;
 
-	return pp_funcs->get_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_fan_speed_rpm(adev->powerplay.pp_handle,
+					  speed);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
 				 uint32_t speed)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_fan_speed_rpm)
 		return -EINVAL;
 
-	return pp_funcs->set_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_fan_speed_rpm(adev->powerplay.pp_handle,
+					  speed);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
@@ -862,7 +1097,10 @@ int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
 	if (!pp_funcs->set_fan_control_mode)
 		return -EOPNOTSUPP;
 
-	pp_funcs->set_fan_control_mode(adev->powerplay.pp_handle, mode);
+	mutex_lock(&adev->pm.mutex);
+	pp_funcs->set_fan_control_mode(adev->powerplay.pp_handle,
+				       mode);
+	mutex_unlock(&adev->pm.mutex);
 
 	return 0;
 }
@@ -873,33 +1111,50 @@ int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
 			       enum pp_power_type power_type)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_power_limit)
 		return -ENODATA;
 
-	return pp_funcs->get_power_limit(adev->powerplay.pp_handle,
-					 limit,
-					 pp_limit_level,
-					 power_type);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_power_limit(adev->powerplay.pp_handle,
+					limit,
+					pp_limit_level,
+					power_type);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
 			       uint32_t limit)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_power_limit)
 		return -EINVAL;
 
-	return pp_funcs->set_power_limit(adev->powerplay.pp_handle, limit);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_power_limit(adev->powerplay.pp_handle,
+					limit);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev)
 {
+	bool cclk_dpm_supported = false;
+
 	if (!is_support_sw_smu(adev))
 		return false;
 
-	return is_support_cclk_dpm(adev);
+	mutex_lock(&adev->pm.mutex);
+	cclk_dpm_supported = is_support_cclk_dpm(adev);
+	mutex_unlock(&adev->pm.mutex);
+
+	return (int)cclk_dpm_supported;
 }
 
 int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
@@ -910,8 +1165,10 @@ int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *ade
 	if (!pp_funcs->debugfs_print_current_performance_level)
 		return -EOPNOTSUPP;
 
+	mutex_lock(&adev->pm.mutex);
 	pp_funcs->debugfs_print_current_performance_level(adev->powerplay.pp_handle,
 							  m);
+	mutex_unlock(&adev->pm.mutex);
 
 	return 0;
 }
@@ -921,13 +1178,18 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
 				       size_t *size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_smu_prv_buf_details)
 		return -ENOSYS;
 
-	return pp_funcs->get_smu_prv_buf_details(adev->powerplay.pp_handle,
-						 addr,
-						 size);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_smu_prv_buf_details(adev->powerplay.pp_handle,
+						addr,
+						size);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
@@ -948,19 +1210,27 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
 			    size_t size)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_pp_table)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_pp_table(adev->powerplay.pp_handle,
-				      buf,
-				      size);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_pp_table(adev->powerplay.pp_handle,
+				     buf,
+				     size);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = adev->powerplay.pp_handle;
 
+	if (!is_support_sw_smu(adev))
+		return INT_MAX;
+
 	return smu->cpu_core_num;
 }
 
@@ -976,12 +1246,17 @@ int amdgpu_dpm_display_configuration_change(struct amdgpu_device *adev,
 					    const struct amd_pp_display_configuration *input)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->display_configuration_change)
 		return 0;
 
-	return pp_funcs->display_configuration_change(adev->powerplay.pp_handle,
-						      input);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->display_configuration_change(adev->powerplay.pp_handle,
+						     input);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_clock_by_type(struct amdgpu_device *adev,
@@ -989,25 +1264,35 @@ int amdgpu_dpm_get_clock_by_type(struct amdgpu_device *adev,
 				 struct amd_pp_clocks *clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_clock_by_type)
 		return 0;
 
-	return pp_funcs->get_clock_by_type(adev->powerplay.pp_handle,
-					   type,
-					   clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_clock_by_type(adev->powerplay.pp_handle,
+					  type,
+					  clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_display_mode_validation_clks(struct amdgpu_device *adev,
 						struct amd_pp_simple_clock_info *clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_display_mode_validation_clocks)
 		return 0;
 
-	return pp_funcs->get_display_mode_validation_clocks(adev->powerplay.pp_handle,
-							    clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_display_mode_validation_clocks(adev->powerplay.pp_handle,
+							   clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_clock_by_type_with_latency(struct amdgpu_device *adev,
@@ -1015,13 +1300,18 @@ int amdgpu_dpm_get_clock_by_type_with_latency(struct amdgpu_device *adev,
 					      struct pp_clock_levels_with_latency *clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_clock_by_type_with_latency)
 		return 0;
 
-	return pp_funcs->get_clock_by_type_with_latency(adev->powerplay.pp_handle,
-							type,
-							clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_clock_by_type_with_latency(adev->powerplay.pp_handle,
+						       type,
+						       clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_clock_by_type_with_voltage(struct amdgpu_device *adev,
@@ -1029,49 +1319,69 @@ int amdgpu_dpm_get_clock_by_type_with_voltage(struct amdgpu_device *adev,
 					      struct pp_clock_levels_with_voltage *clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_clock_by_type_with_voltage)
 		return 0;
 
-	return pp_funcs->get_clock_by_type_with_voltage(adev->powerplay.pp_handle,
-							type,
-							clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_clock_by_type_with_voltage(adev->powerplay.pp_handle,
+						       type,
+						       clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_watermarks_for_clocks_ranges(struct amdgpu_device *adev,
 					       void *clock_ranges)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_watermarks_for_clocks_ranges)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_watermarks_for_clocks_ranges(adev->powerplay.pp_handle,
-							  clock_ranges);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_watermarks_for_clocks_ranges(adev->powerplay.pp_handle,
+							 clock_ranges);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_display_clock_voltage_request(struct amdgpu_device *adev,
 					     struct pp_display_clock_request *clock)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->display_clock_voltage_request)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->display_clock_voltage_request(adev->powerplay.pp_handle,
-						       clock);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->display_clock_voltage_request(adev->powerplay.pp_handle,
+						      clock);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_current_clocks(struct amdgpu_device *adev,
 				  struct amd_pp_clock_info *clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_current_clocks)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_current_clocks(adev->powerplay.pp_handle,
-					    clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_current_clocks(adev->powerplay.pp_handle,
+					   clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 void amdgpu_dpm_notify_smu_enable_pwe(struct amdgpu_device *adev)
@@ -1081,31 +1391,43 @@ void amdgpu_dpm_notify_smu_enable_pwe(struct amdgpu_device *adev)
 	if (!pp_funcs->notify_smu_enable_pwe)
 		return;
 
+	mutex_lock(&adev->pm.mutex);
 	pp_funcs->notify_smu_enable_pwe(adev->powerplay.pp_handle);
+	mutex_unlock(&adev->pm.mutex);
 }
 
 int amdgpu_dpm_set_active_display_count(struct amdgpu_device *adev,
 					uint32_t count)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_active_display_count)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_active_display_count(adev->powerplay.pp_handle,
-						  count);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_active_display_count(adev->powerplay.pp_handle,
+						 count);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_set_min_deep_sleep_dcefclk(struct amdgpu_device *adev,
 					  uint32_t clock)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->set_min_deep_sleep_dcefclk)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->set_min_deep_sleep_dcefclk(adev->powerplay.pp_handle,
-						    clock);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->set_min_deep_sleep_dcefclk(adev->powerplay.pp_handle,
+						   clock);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 void amdgpu_dpm_set_hard_min_dcefclk_by_freq(struct amdgpu_device *adev,
@@ -1116,8 +1438,10 @@ void amdgpu_dpm_set_hard_min_dcefclk_by_freq(struct amdgpu_device *adev,
 	if (!pp_funcs->set_hard_min_dcefclk_by_freq)
 		return;
 
+	mutex_lock(&adev->pm.mutex);
 	pp_funcs->set_hard_min_dcefclk_by_freq(adev->powerplay.pp_handle,
 					       clock);
+	mutex_unlock(&adev->pm.mutex);
 }
 
 void amdgpu_dpm_set_hard_min_fclk_by_freq(struct amdgpu_device *adev,
@@ -1128,32 +1452,44 @@ void amdgpu_dpm_set_hard_min_fclk_by_freq(struct amdgpu_device *adev,
 	if (!pp_funcs->set_hard_min_fclk_by_freq)
 		return;
 
+	mutex_lock(&adev->pm.mutex);
 	pp_funcs->set_hard_min_fclk_by_freq(adev->powerplay.pp_handle,
 					    clock);
+	mutex_unlock(&adev->pm.mutex);
 }
 
 int amdgpu_dpm_display_disable_memory_clock_switch(struct amdgpu_device *adev,
 						   bool disable_memory_clock_switch)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->display_disable_memory_clock_switch)
 		return 0;
 
-	return pp_funcs->display_disable_memory_clock_switch(adev->powerplay.pp_handle,
-							     disable_memory_clock_switch);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->display_disable_memory_clock_switch(adev->powerplay.pp_handle,
+							    disable_memory_clock_switch);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_max_sustainable_clocks_by_dc(struct amdgpu_device *adev,
 						struct pp_smu_nv_clock_table *max_clocks)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_max_sustainable_clocks_by_dc)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_max_sustainable_clocks_by_dc(adev->powerplay.pp_handle,
-							  max_clocks);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_max_sustainable_clocks_by_dc(adev->powerplay.pp_handle,
+							 max_clocks);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 enum pp_smu_status amdgpu_dpm_get_uclk_dpm_states(struct amdgpu_device *adev,
@@ -1161,23 +1497,33 @@ enum pp_smu_status amdgpu_dpm_get_uclk_dpm_states(struct amdgpu_device *adev,
 						  unsigned int *num_states)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_uclk_dpm_states)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_uclk_dpm_states(adev->powerplay.pp_handle,
-					     clock_values_in_khz,
-					     num_states);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_uclk_dpm_states(adev->powerplay.pp_handle,
+					    clock_values_in_khz,
+					    num_states);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
 
 int amdgpu_dpm_get_dpm_clock_table(struct amdgpu_device *adev,
 				   struct dpm_clocks *clock_table)
 {
 	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
 
 	if (!pp_funcs->get_dpm_clock_table)
 		return -EOPNOTSUPP;
 
-	return pp_funcs->get_dpm_clock_table(adev->powerplay.pp_handle,
-					     clock_table);
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->get_dpm_clock_table(adev->powerplay.pp_handle,
+					    clock_table);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
 }
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
index 72824ef61edd..b37662c4a413 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
@@ -3040,21 +3040,18 @@ static int kv_dpm_sw_init(void *handle)
 		return 0;
 
 	INIT_WORK(&adev->pm.dpm.thermal.work, amdgpu_dpm_thermal_work_handler);
-	mutex_lock(&adev->pm.mutex);
 	ret = kv_dpm_init(adev);
 	if (ret)
 		goto dpm_failed;
 	adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
 	if (amdgpu_dpm == 1)
 		amdgpu_pm_print_power_states(adev);
-	mutex_unlock(&adev->pm.mutex);
 	DRM_INFO("amdgpu: dpm initialized\n");
 
 	return 0;
 
 dpm_failed:
 	kv_dpm_fini(adev);
-	mutex_unlock(&adev->pm.mutex);
 	DRM_ERROR("amdgpu: dpm initialization failed\n");
 	return ret;
 }
@@ -3065,9 +3062,7 @@ static int kv_dpm_sw_fini(void *handle)
 
 	flush_work(&adev->pm.dpm.thermal.work);
 
-	mutex_lock(&adev->pm.mutex);
 	kv_dpm_fini(adev);
-	mutex_unlock(&adev->pm.mutex);
 
 	return 0;
 }
@@ -3080,14 +3075,12 @@ static int kv_dpm_hw_init(void *handle)
 	if (!amdgpu_dpm)
 		return 0;
 
-	mutex_lock(&adev->pm.mutex);
 	kv_dpm_setup_asic(adev);
 	ret = kv_dpm_enable(adev);
 	if (ret)
 		adev->pm.dpm_enabled = false;
 	else
 		adev->pm.dpm_enabled = true;
-	mutex_unlock(&adev->pm.mutex);
 	amdgpu_legacy_dpm_compute_clocks(adev);
 	return ret;
 }
@@ -3096,11 +3089,8 @@ static int kv_dpm_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	if (adev->pm.dpm_enabled) {
-		mutex_lock(&adev->pm.mutex);
+	if (adev->pm.dpm_enabled)
 		kv_dpm_disable(adev);
-		mutex_unlock(&adev->pm.mutex);
-	}
 
 	return 0;
 }
@@ -3110,12 +3100,10 @@ static int kv_dpm_suspend(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	if (adev->pm.dpm_enabled) {
-		mutex_lock(&adev->pm.mutex);
 		/* disable dpm */
 		kv_dpm_disable(adev);
 		/* reset the power state */
 		adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
-		mutex_unlock(&adev->pm.mutex);
 	}
 	return 0;
 }
@@ -3127,14 +3115,12 @@ static int kv_dpm_resume(void *handle)
 
 	if (adev->pm.dpm_enabled) {
 		/* asic init will reset to the boot state */
-		mutex_lock(&adev->pm.mutex);
 		kv_dpm_setup_asic(adev);
 		ret = kv_dpm_enable(adev);
 		if (ret)
 			adev->pm.dpm_enabled = false;
 		else
 			adev->pm.dpm_enabled = true;
-		mutex_unlock(&adev->pm.mutex);
 		if (adev->pm.dpm_enabled)
 			amdgpu_legacy_dpm_compute_clocks(adev);
 	}
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
index b8dbddefb74e..51ab78f2aae2 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
@@ -7786,21 +7786,18 @@ static int si_dpm_sw_init(void *handle)
 		return ret;
 
 	INIT_WORK(&adev->pm.dpm.thermal.work, amdgpu_dpm_thermal_work_handler);
-	mutex_lock(&adev->pm.mutex);
 	ret = si_dpm_init(adev);
 	if (ret)
 		goto dpm_failed;
 	adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
 	if (amdgpu_dpm == 1)
 		amdgpu_pm_print_power_states(adev);
-	mutex_unlock(&adev->pm.mutex);
 	DRM_INFO("amdgpu: dpm initialized\n");
 
 	return 0;
 
 dpm_failed:
 	si_dpm_fini(adev);
-	mutex_unlock(&adev->pm.mutex);
 	DRM_ERROR("amdgpu: dpm initialization failed\n");
 	return ret;
 }
@@ -7811,9 +7808,7 @@ static int si_dpm_sw_fini(void *handle)
 
 	flush_work(&adev->pm.dpm.thermal.work);
 
-	mutex_lock(&adev->pm.mutex);
 	si_dpm_fini(adev);
-	mutex_unlock(&adev->pm.mutex);
 
 	return 0;
 }
@@ -7827,14 +7822,12 @@ static int si_dpm_hw_init(void *handle)
 	if (!amdgpu_dpm)
 		return 0;
 
-	mutex_lock(&adev->pm.mutex);
 	si_dpm_setup_asic(adev);
 	ret = si_dpm_enable(adev);
 	if (ret)
 		adev->pm.dpm_enabled = false;
 	else
 		adev->pm.dpm_enabled = true;
-	mutex_unlock(&adev->pm.mutex);
 	amdgpu_legacy_dpm_compute_clocks(adev);
 	return ret;
 }
@@ -7843,11 +7836,8 @@ static int si_dpm_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	if (adev->pm.dpm_enabled) {
-		mutex_lock(&adev->pm.mutex);
+	if (adev->pm.dpm_enabled)
 		si_dpm_disable(adev);
-		mutex_unlock(&adev->pm.mutex);
-	}
 
 	return 0;
 }
@@ -7857,12 +7847,10 @@ static int si_dpm_suspend(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	if (adev->pm.dpm_enabled) {
-		mutex_lock(&adev->pm.mutex);
 		/* disable dpm */
 		si_dpm_disable(adev);
 		/* reset the power state */
 		adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
-		mutex_unlock(&adev->pm.mutex);
 	}
 	return 0;
 }
@@ -7874,14 +7862,12 @@ static int si_dpm_resume(void *handle)
 
 	if (adev->pm.dpm_enabled) {
 		/* asic init will reset to the boot state */
-		mutex_lock(&adev->pm.mutex);
 		si_dpm_setup_asic(adev);
 		ret = si_dpm_enable(adev);
 		if (ret)
 			adev->pm.dpm_enabled = false;
 		else
 			adev->pm.dpm_enabled = true;
-		mutex_unlock(&adev->pm.mutex);
 		if (adev->pm.dpm_enabled)
 			amdgpu_legacy_dpm_compute_clocks(adev);
 	}
-- 
2.29.0

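Every amdgpu_dpm.c hunk above follows the same shape. As a minimal
sketch of the pattern (amdgpu_dpm_foo and the foo callback are
placeholders, not functions from this patch):

	int amdgpu_dpm_foo(struct amdgpu_device *adev, uint32_t arg)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
		int ret;

		/* Keep whatever "not implemented" value the wrapper
		 * returned before: 0, -EINVAL or -EOPNOTSUPP,
		 * depending on the API. */
		if (!pp_funcs->foo)
			return -EOPNOTSUPP;

		/* One lock now guards every power entry point. */
		mutex_lock(&adev->pm.mutex);
		ret = pp_funcs->foo(adev->powerplay.pp_handle, arg);
		mutex_unlock(&adev->pm.mutex);

		return ret;
	}

With the lock taken at this single entry layer, the kv_dpm.c and
si_dpm.c handlers above drop their own adev->pm.mutex locking, so the
non-recursive mutex is never acquired twice on the same call path.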

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-11-30  7:42 ` [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power Evan Quan
@ 2021-11-30  8:09   ` Lazar, Lijo
  2021-12-01  1:59     ` Quan, Evan
  2021-12-01  3:37     ` Lazar, Lijo
  0 siblings, 2 replies; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30  8:09 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> Those implementation details (whether swsmu is supported, whether some
> ppt_funcs are implemented, access to internal statistics, ...) should be
> kept internal. It is not good practice, and even error-prone, to expose
> implementation details.
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
> ---
>   drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
>   .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95 +++++++++++++++++++
>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
>   drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
>   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
>   13 files changed, 155 insertions(+), 64 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> index bcfdb63b1d42..a545df4efce1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> @@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
>   	adev->gfx.rlc.funcs->resume(adev);
>   
>   	/* Wait for FW reset event complete */
> -	r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
> +	r = amdgpu_dpm_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
>   	if (r) {
>   		dev_err(adev->dev,
>   			"Failed to get response from firmware after reset\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 164d6a9e9fbb..0d1f00b24aae 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
>   		return ret;
>   	}
>   
> -	if (is_support_sw_smu(adev)) {
> -		ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, &max_freq);
> -		if (ret || val > max_freq || val < min_freq)
> -			return -EINVAL;
> -		ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, (uint32_t)val, (uint32_t)val);
> -	} else {
> -		return 0;
> +	ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK, &min_freq, &max_freq);
> +	if (ret == -EOPNOTSUPP) {
> +		ret = 0;
> +		goto out;
>   	}
> +	if (ret || val > max_freq || val < min_freq) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK, (uint32_t)val, (uint32_t)val);
> +	if (ret)
> +		ret = -EINVAL;
>   
> +out:
>   	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>   	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>   
> -	if (ret)
> -		return -EINVAL;
> -
> -	return 0;
> +	return ret;
>   }
>   
>   DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 1989f9e9379e..41cc1ffb5809 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct amdgpu_device *adev)
>   	if (adev->asic_type == CHIP_ARCTURUS &&
>   	    amdgpu_passthrough(adev) &&
>   	    adev->gmc.xgmi.num_physical_nodes > 1)
> -		smu_set_light_sbr(&adev->smu, true);
> +		amdgpu_dpm_set_light_sbr(adev, true);
>   
>   	if (adev->gmc.xgmi.num_physical_nodes > 1) {
>   		mutex_lock(&mgpu_info.mutex);
> @@ -2857,7 +2857,7 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
>   	int i, r;
>   
>   	if (adev->in_s0ix)
> -		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry);
> +		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D3Entry);
>   
>   	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>   		if (!adev->ip_blocks[i].status.valid)
> @@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
>   		return 0;
>   
>   	if (adev->in_s0ix)
> -		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry);
> +		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D0Entry);
>   
>   	/* post card */
>   	if (amdgpu_device_need_post(adev)) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> index 1916ec84dd71..3d8f82dc8c97 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> @@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct amdgpu_device *adev, uint32_t *value)
>   
>   	mutex_lock(&adev->gfx.gfx_off_mutex);
>   
> -	r = smu_get_status_gfxoff(adev, value);
> +	r = amdgpu_dpm_get_status_gfxoff(adev, value);
>   
>   	mutex_unlock(&adev->gfx.gfx_off_mutex);
>   
> @@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev)
>   	}
>   	return amdgpu_num_kcq;
>   }
> -
> -/* amdgpu_gfx_state_change_set - Handle gfx power state change set
> - * @adev: amdgpu_device pointer
> - * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
> - *
> - */
> -
> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state)
> -{
> -	mutex_lock(&adev->pm.mutex);
> -	if (adev->powerplay.pp_funcs &&
> -	    adev->powerplay.pp_funcs->gfx_state_change_set)
> -		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> -			(adev)->powerplay.pp_handle, state));
> -	mutex_unlock(&adev->pm.mutex);
> -}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> index f851196c83a5..776c886fd94a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> @@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
>   	AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
>   };
>   
> -/* Argument for PPSMC_MSG_GpuChangeState */
> -enum gfx_change_state {
> -	sGpuChangeState_D0Entry = 1,
> -	sGpuChangeState_D3Entry,
> -};
> -
>   #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
>   #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
>   
> @@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct amdgpu_device *adev,
>   uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
>   void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v);
>   int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev);
> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state);
>   #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> index 3c623e589b79..35c4aec04a7e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> @@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct amdgpu_device *adev, struct ras_err_d
>   	 * choosing right query method according to
>   	 * whether smu support query error information
>   	 */
> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
>   	if (ret == -EOPNOTSUPP) {
>   		if (adev->umc.ras_funcs &&
>   			adev->umc.ras_funcs->query_ras_error_count)
> @@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
>   		if (ret)
>   			goto free;
>   
> -		if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> -			adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
> +		amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
>   	}
>   
>   #ifdef CONFIG_X86_MCE_AMD
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> index 6e4bea012ea4..5fed26c8db44 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> @@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
>   	int ret = 0;
>   
>   	kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
>   	if (ret == -EOPNOTSUPP) {
>   		if (adev->umc.ras_funcs &&
>   		    adev->umc.ras_funcs->query_ras_error_count)
> @@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
>   						err_data->err_addr_cnt);
>   			amdgpu_ras_save_bad_pages(adev);
>   
> -			if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> -				adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
> +			amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
>   		}
>   
>   		amdgpu_ras_reset_gpu(adev);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> index deae12dc777d..329a4c89f1e6 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> @@ -222,7 +222,7 @@ void kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
>   
>   	len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
>   		       KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
> -		       atomic64_read(&dev->adev->smu.throttle_int_counter));
> +		       amdgpu_dpm_get_thermal_throttling_counter(dev->adev));
>   
>   	add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE,	fifo_in, len);
>   }
> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> index 5c0867ebcfce..2e295facd086 100644
> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> @@ -26,6 +26,10 @@
>   
>   extern const struct amdgpu_ip_block_version pp_smu_ip_block;
>   
> +enum smu_event_type {
> +	SMU_EVENT_RESET_COMPLETE = 0,
> +};
> +
>   struct amd_vce_state {
>   	/* vce clocks */
>   	u32 evclk;
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index 08362d506534..9b332c8a0079 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
>   
>   	return 0;
>   }
> +
> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
> +{
> +	return smu_set_light_sbr(&adev->smu, enable);
> +}
> +
> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
> +{
> +	int ret = 0;
> +
> +	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> +		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
> +
> +	return ret;
> +}
> +
> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> +				  enum pp_clock_type type,
> +				  uint32_t *min,
> +				  uint32_t *max)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return -EOPNOTSUPP;
> +
> +	switch (type) {
> +	case PP_SCLK:
> +		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> +				   enum pp_clock_type type,
> +				   uint32_t min,
> +				   uint32_t max)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return -EOPNOTSUPP;
> +
> +	switch (type) {
> +	case PP_SCLK:
> +		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
> +			      enum smu_event_type event,
> +			      uint64_t event_arg)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return -EOPNOTSUPP;
> +
> +	return smu_wait_for_event(&adev->smu, event, event_arg);
> +}
> +
> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return -EOPNOTSUPP;
> +
> +	return smu_get_status_gfxoff(&adev->smu, value);
> +}
> +
> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
> +{
> +	return atomic64_read(&adev->smu.throttle_int_counter);
> +}
> +
> +/* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
> + * @adev: amdgpu_device pointer
> + * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
> + *
> + */
> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> +				 enum gfx_change_state state)
> +{
> +	mutex_lock(&adev->pm.mutex);
> +	if (adev->powerplay.pp_funcs &&
> +	    adev->powerplay.pp_funcs->gfx_state_change_set)
> +		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> +			(adev)->powerplay.pp_handle, state));
> +	mutex_unlock(&adev->pm.mutex);
> +}
> +
> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> +			    void *umc_ecc)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return -EOPNOTSUPP;
> +

In general, I don't think we need to keep this check everywhere just to
make amdgpu_dpm_* backwards compatible. The usage is also inconsistent:
for example, amdgpu_dpm_get_thermal_throttling_counter has no
is_support_sw_smu check, whereas amdgpu_dpm_get_ecc_info() does. There
is no reason to keep adding an is_support_sw_smu() check to every new
public API; those APIs are never going to work with the powerplay
subsystem anyway.

I would rather leave the old entry points as they are and create
amdgpu_smu_* for anything which is supported only in the smu subsystem.
That is also easier to read from a code perspective - it separates the
functions the smu component supports from those the older powerplay
component does not.

Keep amdgpu_dpm_* only for the common ones supported by both powerplay
and smu; for the others, the preference would be amdgpu_smu_*, along
the lines of the sketch below.
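A rough illustration of that split (hypothetical declarations, not part
of this series):

	/* Common to powerplay and swsmu: routed through pp_funcs. */
	int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
					 enum pp_clock_type type,
					 uint32_t mask);

	/*
	 * swsmu-only feature: an amdgpu_smu_* prefix makes the dependency
	 * explicit, so callers need no is_support_sw_smu() check and
	 * readers see at once that there is no powerplay implementation
	 * behind it.
	 */
	int amdgpu_smu_get_ecc_info(struct amdgpu_device *adev, void *umc_ecc);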

Thanks,
Lijo

> +	return smu_get_ecc_info(&adev->smu, umc_ecc);
> +}
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> index 16e3f72d31b9..7289d379a9fb 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> @@ -23,6 +23,12 @@
>   #ifndef __AMDGPU_DPM_H__
>   #define __AMDGPU_DPM_H__
>   
> +/* Argument for PPSMC_MSG_GpuChangeState */
> +enum gfx_change_state {
> +	sGpuChangeState_D0Entry = 1,
> +	sGpuChangeState_D3Entry,
> +};
> +
>   enum amdgpu_int_thermal_type {
>   	THERMAL_TYPE_NONE,
>   	THERMAL_TYPE_EXTERNAL,
> @@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
>   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
>   void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>   int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
> -
> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> +				       enum pp_clock_type type,
> +				       uint32_t *min,
> +				       uint32_t *max);
> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> +				        enum pp_clock_type type,
> +				        uint32_t min,
> +				        uint32_t max);
> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> +		       uint64_t event_arg);
> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev);
> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> +				 enum gfx_change_state state);
> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> +			    void *umc_ecc);
>   #endif
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> index f738f7dc20c9..29791bb21fba 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> @@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
>   	uint32_t clk_dependency;
>   };
>   
> -enum smu_event_type {
> -
> -	SMU_EVENT_RESET_COMPLETE = 0,
> -};
> -
>   #define SMU_TABLE_INIT(tables, table_id, s, a, d)	\
>   	do {						\
>   		tables[table_id].size = s;		\
> @@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context *smu);
>   
>   int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
>   
> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
>   
>   int smu_set_light_sbr(struct smu_context *smu, bool enable);
>   
> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
>   		       uint64_t event_arg);
>   int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
>   int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> index 5839918cb574..ef7d0e377965 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> @@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void *handle,
>   	return ret;
>   }
>   
> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
>   {
> -	int ret = 0;
> -	struct smu_context *smu = &adev->smu;
> +	if (!smu->ppt_funcs->get_gfx_off_status)
> +		return -EINVAL;
>   
> -	if (is_support_sw_smu(adev) && smu->ppt_funcs->get_gfx_off_status)
> -		*value = smu_get_gfx_off_status(smu);
> -	else
> -		ret = -EINVAL;
> +	*value = smu_get_gfx_off_status(smu);
>   
> -	return ret;
> +	return 0;
>   }
>   
>   int smu_set_soft_freq_range(struct smu_context *smu,
> @@ -3167,11 +3164,10 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
>   	.get_smu_prv_buf_details = smu_get_prv_buffer_details,
>   };
>   
> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
>   		       uint64_t event_arg)
>   {
>   	int ret = -EINVAL;
> -	struct smu_context *smu = &adev->smu;
>   
>   	if (smu->ppt_funcs->wait_for_event) {
>   		mutex_lock(&smu->mutex);
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 00/17] Unified entry point for other blocks to interact with power
  2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
                   ` (16 preceding siblings ...)
  2021-11-30  7:42 ` [PATCH V2 17/17] drm/amd/pm: unified lock protections in amdgpu_dpm.c Evan Quan
@ 2021-11-30  9:58 ` Christian König
  17 siblings, 0 replies; 44+ messages in thread
From: Christian König @ 2021-11-30  9:58 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, lijo.lazar, Kenneth.Feng

On 30.11.21 at 08:42, Evan Quan wrote:
> There are several problems with current power implementations:
> 1. Too many internal details are exposed to other blocks. Thus to interact with
>     power, they need to know which power framework is used(powerplay vs swsmu)
>     or even whether some API is implemented.
> 2. A lot of cross callings exist which make it hard to get a whole picture of
>     the code hierarchy. And that makes any code change/increment error-prone.
> 3. Many different types of lock are used. It is calculated there is totally
>     13 different locks are used within power. Some of them are even designed for
>     the same purpose.
>
> To ease the problems above, this patch series try to
> 1. provide unified entry point for other blocks to interact with power.
> 2. relocate some source code piece/headers to avoid cross callings.
> 3. enforce a unified lock protection on those entry point APIs above.
>     That makes the future optimization for unnecessary power locks possible.

I only skimmed over it, but it looks really good at first glance.

But you need to have Alex take a look as well, since I only have a very
high-level understanding of power management.

Regards,
Christian.

>
> Evan Quan (17):
>    drm/amd/pm: do not expose implementation details to other blocks out
>      of power
>    drm/amd/pm: do not expose power implementation details to amdgpu_pm.c
>    drm/amd/pm: do not expose power implementation details to display
>    drm/amd/pm: do not expose those APIs used internally only in
>      amdgpu_dpm.c
>    drm/amd/pm: do not expose those APIs used internally only in si_dpm.c
>    drm/amd/pm: do not expose the API used internally only in kv_dpm.c
>    drm/amd/pm: create a new holder for those APIs used only by legacy
>      ASICs(si/kv)
>    drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure
>    drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations
>    drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c
>    drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task()
>    drm/amd/pm: drop redundant or unused APIs and data structures
>    drm/amd/pm: do not expose the smu_context structure used internally in
>      power
>    drm/amd/pm: relocate the power related headers
>    drm/amd/pm: drop unnecessary gfxoff controls
>    drm/amd/pm: revise the performance level setting APIs
>    drm/amd/pm: unified lock protections in amdgpu_dpm.c
>
>   drivers/gpu/drm/amd/amdgpu/aldebaran.c        |    2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |    7 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   |   25 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |    6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |   18 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |    7 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |    5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |    5 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |    2 +-
>   .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |    6 +-
>   .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  246 +-
>   .../gpu/drm/amd/include/kgd_pp_interface.h    |   14 +
>   drivers/gpu/drm/amd/pm/Makefile               |   12 +-
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 2435 ++++++++---------
>   drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  |   94 +
>   drivers/gpu/drm/amd/pm/amdgpu_pm.c            |  568 ++--
>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |  339 +--
>   .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |   32 +
>   drivers/gpu/drm/amd/pm/legacy-dpm/Makefile    |   32 +
>   .../pm/{powerplay => legacy-dpm}/cik_dpm.h    |    0
>   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.c |   47 +-
>   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.h |    0
>   .../amd/pm/{powerplay => legacy-dpm}/kv_smc.c |    0
>   .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    | 1510 ++++++++++
>   .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h    |   71 +
>   .../amd/pm/{powerplay => legacy-dpm}/ppsmc.h  |    0
>   .../pm/{powerplay => legacy-dpm}/r600_dpm.h   |    0
>   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.c |  111 +-
>   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.h |    7 +
>   .../amd/pm/{powerplay => legacy-dpm}/si_smc.c |    0
>   .../{powerplay => legacy-dpm}/sislands_smc.h  |    0
>   drivers/gpu/drm/amd/pm/powerplay/Makefile     |    4 -
>   .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  |   51 +-
>   .../drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c   |   10 +-
>   .../pm/{ => powerplay}/inc/amd_powerplay.h    |    0
>   .../drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h |    0
>   .../amd/pm/{ => powerplay}/inc/fiji_ppsmc.h   |    0
>   .../pm/{ => powerplay}/inc/hardwaremanager.h  |    0
>   .../drm/amd/pm/{ => powerplay}/inc/hwmgr.h    |    3 -
>   .../{ => powerplay}/inc/polaris10_pwrvirus.h  |    0
>   .../amd/pm/{ => powerplay}/inc/power_state.h  |    0
>   .../drm/amd/pm/{ => powerplay}/inc/pp_debug.h |    0
>   .../amd/pm/{ => powerplay}/inc/pp_endian.h    |    0
>   .../amd/pm/{ => powerplay}/inc/pp_thermal.h   |    0
>   .../amd/pm/{ => powerplay}/inc/ppinterrupt.h  |    0
>   .../drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu10.h    |    0
>   .../pm/{ => powerplay}/inc/smu10_driver_if.h  |    0
>   .../pm/{ => powerplay}/inc/smu11_driver_if.h  |    0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu71.h    |    0
>   .../pm/{ => powerplay}/inc/smu71_discrete.h   |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu72.h    |    0
>   .../pm/{ => powerplay}/inc/smu72_discrete.h   |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu73.h    |    0
>   .../pm/{ => powerplay}/inc/smu73_discrete.h   |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu74.h    |    0
>   .../pm/{ => powerplay}/inc/smu74_discrete.h   |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smu75.h    |    0
>   .../pm/{ => powerplay}/inc/smu75_discrete.h   |    0
>   .../amd/pm/{ => powerplay}/inc/smu7_common.h  |    0
>   .../pm/{ => powerplay}/inc/smu7_discrete.h    |    0
>   .../amd/pm/{ => powerplay}/inc/smu7_fusion.h  |    0
>   .../amd/pm/{ => powerplay}/inc/smu7_ppsmc.h   |    0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h |    0
>   .../amd/pm/{ => powerplay}/inc/smu8_fusion.h  |    0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h |    0
>   .../pm/{ => powerplay}/inc/smu9_driver_if.h   |    0
>   .../{ => powerplay}/inc/smu_ucode_xfer_cz.h   |    0
>   .../{ => powerplay}/inc/smu_ucode_xfer_vi.h   |    0
>   .../drm/amd/pm/{ => powerplay}/inc/smumgr.h   |    0
>   .../amd/pm/{ => powerplay}/inc/tonga_ppsmc.h  |    0
>   .../amd/pm/{ => powerplay}/inc/vega10_ppsmc.h |    0
>   .../inc/vega12/smu9_driver_if.h               |    0
>   .../amd/pm/{ => powerplay}/inc/vega12_ppsmc.h |    0
>   .../amd/pm/{ => powerplay}/inc/vega20_ppsmc.h |    0
>   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |   95 +-
>   .../amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h  |    0
>   .../drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h   |   20 +-
>   .../amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h   |    0
>   .../inc/smu11_driver_if_arcturus.h            |    0
>   .../inc/smu11_driver_if_cyan_skillfish.h      |    0
>   .../{ => swsmu}/inc/smu11_driver_if_navi10.h  |    0
>   .../inc/smu11_driver_if_sienna_cichlid.h      |    0
>   .../{ => swsmu}/inc/smu11_driver_if_vangogh.h |    0
>   .../amd/pm/{ => swsmu}/inc/smu12_driver_if.h  |    0
>   .../inc/smu13_driver_if_aldebaran.h           |    0
>   .../inc/smu13_driver_if_yellow_carp.h         |    0
>   .../pm/{ => swsmu}/inc/smu_11_0_cdr_table.h   |    0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_types.h    |    0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h    |    0
>   .../pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h    |    0
>   .../pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h  |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h  |    0
>   .../pm/{ => swsmu}/inc/smu_v11_0_pptable.h    |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h   |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h  |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h   |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h  |    0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h    |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h  |    0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h    |    0
>   .../amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h |    0
>   .../pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h    |    0
>   .../pm/{ => swsmu}/inc/smu_v13_0_pptable.h    |    0
>   .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |   10 +-
>   .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |    9 +-
>   .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |   34 +-
>   .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |   11 +-
>   .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |   10 +-
>   .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |   15 +-
>   drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h        |    4 +
>   114 files changed, 3657 insertions(+), 2671 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
>   create mode 100644 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
>   create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/cik_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.c (99%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_smc.c (100%)
>   create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
>   create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/r600_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.c (99%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.h (99%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_smc.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/sislands_smc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/amd_powerplay.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/fiji_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hardwaremanager.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hwmgr.h (99%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/polaris10_pwrvirus.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/power_state.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_debug.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_endian.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_thermal.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/ppinterrupt.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu11_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_common.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_fusion.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8_fusion.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_cz.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_vi.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smumgr.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/tonga_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega10_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12/smu9_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega20_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h (98%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_arcturus.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_cyan_skillfish.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_navi10.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_sienna_cichlid.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_vangogh.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu12_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_aldebaran.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_yellow_carp.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_11_0_cdr_table.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_types.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_pptable.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_pptable.h (100%)
>



* Re: [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c
  2021-11-30  7:42 ` [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c Evan Quan
@ 2021-11-30 12:22   ` Lazar, Lijo
  2021-12-01  2:07     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 12:22 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> Move them to si_dpm.c instead.
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I288205cfd7c6ba09cfb22626ff70360d61ff0c67
> --
> v1->v2:
>    - rename the API with an "si_" prefix (Alex)
> ---
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 25 -----------
>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 25 -----------
>   drivers/gpu/drm/amd/pm/powerplay/si_dpm.c | 54 +++++++++++++++++++----
>   drivers/gpu/drm/amd/pm/powerplay/si_dpm.h |  7 +++
>   4 files changed, 53 insertions(+), 58 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index 52ac3c883a6e..fbfc07a83122 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -894,31 +894,6 @@ void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
>   	}
>   }
>   
> -enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
> -						 u32 sys_mask,
> -						 enum amdgpu_pcie_gen asic_gen,
> -						 enum amdgpu_pcie_gen default_gen)
> -{
> -	switch (asic_gen) {
> -	case AMDGPU_PCIE_GEN1:
> -		return AMDGPU_PCIE_GEN1;
> -	case AMDGPU_PCIE_GEN2:
> -		return AMDGPU_PCIE_GEN2;
> -	case AMDGPU_PCIE_GEN3:
> -		return AMDGPU_PCIE_GEN3;
> -	default:
> -		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
> -		    (default_gen == AMDGPU_PCIE_GEN3))
> -			return AMDGPU_PCIE_GEN3;
> -		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
> -			 (default_gen == AMDGPU_PCIE_GEN2))
> -			return AMDGPU_PCIE_GEN2;
> -		else
> -			return AMDGPU_PCIE_GEN1;
> -	}
> -	return AMDGPU_PCIE_GEN1;
> -}
> -
>   struct amd_vce_state*
>   amdgpu_get_vce_clock_state(void *handle, u32 idx)
>   {
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> index 6681b878e75f..f43b96dfe9d8 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> @@ -45,19 +45,6 @@ enum amdgpu_int_thermal_type {
>   	THERMAL_TYPE_KV,
>   };
>   
> -enum amdgpu_dpm_auto_throttle_src {
> -	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
> -	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
> -};
> -
> -enum amdgpu_dpm_event_src {
> -	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
> -	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
> -	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
> -	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
> -	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
> -};
> -
>   struct amdgpu_ps {
>   	u32 caps; /* vbios flags */
>   	u32 class; /* vbios flags */
> @@ -252,13 +239,6 @@ struct amdgpu_dpm_fan {
>   	bool ucode_fan_control;
>   };
>   
> -enum amdgpu_pcie_gen {
> -	AMDGPU_PCIE_GEN1 = 0,
> -	AMDGPU_PCIE_GEN2 = 1,
> -	AMDGPU_PCIE_GEN3 = 2,
> -	AMDGPU_PCIE_GEN_INVALID = 0xffff
> -};
> -
>   #define amdgpu_dpm_reset_power_profile_state(adev, request) \
>   		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
>   			(adev)->powerplay.pp_handle, request))
> @@ -403,11 +383,6 @@ void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
>   
>   void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
>   
> -enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
> -						 u32 sys_mask,
> -						 enum amdgpu_pcie_gen asic_gen,
> -						 enum amdgpu_pcie_gen default_gen);
> -
>   struct amd_vce_state*
>   amdgpu_get_vce_clock_state(void *handle, u32 idx);
>   
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> index 81f82aa05ec2..4f84d8b893f1 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> @@ -96,6 +96,19 @@ union pplib_clock_info {
>   	struct _ATOM_PPLIB_SI_CLOCK_INFO si;
>   };
>   
> +enum amdgpu_dpm_auto_throttle_src {
> +	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
> +	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
> +};
> +
> +enum amdgpu_dpm_event_src {
> +	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
> +	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
> +	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
> +	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
> +	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
> +};
> +

It would be better to also rename the enums, including amdgpu_pcie_gen, 
if they are used only within si_dpm.
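
A minimal sketch of the suggested rename, assuming the enum really is
private to si_dpm (the si_/SI_ names below are illustrative, not taken
from the patch):

enum si_pcie_gen {
	SI_PCIE_GEN1 = 0,
	SI_PCIE_GEN2 = 1,
	SI_PCIE_GEN3 = 2,
	SI_PCIE_GEN_INVALID = 0xffff
};

static enum si_pcie_gen si_gen_pcie_gen_support(struct amdgpu_device *adev,
						u32 sys_mask,
						enum si_pcie_gen asic_gen,
						enum si_pcie_gen default_gen);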

Thanks,
Lijo

>   static const u32 r600_utc[R600_PM_NUMBER_OF_TC] =
>   {
>   	R600_UTC_DFLT_00,
> @@ -4927,6 +4940,31 @@ static int si_populate_smc_initial_state(struct amdgpu_device *adev,
>   	return 0;
>   }
>   
> +static enum amdgpu_pcie_gen si_gen_pcie_gen_support(struct amdgpu_device *adev,
> +						    u32 sys_mask,
> +						    enum amdgpu_pcie_gen asic_gen,
> +						    enum amdgpu_pcie_gen default_gen)
> +{
> +	switch (asic_gen) {
> +	case AMDGPU_PCIE_GEN1:
> +		return AMDGPU_PCIE_GEN1;
> +	case AMDGPU_PCIE_GEN2:
> +		return AMDGPU_PCIE_GEN2;
> +	case AMDGPU_PCIE_GEN3:
> +		return AMDGPU_PCIE_GEN3;
> +	default:
> +		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
> +		    (default_gen == AMDGPU_PCIE_GEN3))
> +			return AMDGPU_PCIE_GEN3;
> +		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
> +			 (default_gen == AMDGPU_PCIE_GEN2))
> +			return AMDGPU_PCIE_GEN2;
> +		else
> +			return AMDGPU_PCIE_GEN1;
> +	}
> +	return AMDGPU_PCIE_GEN1;
> +}
> +
>   static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
>   				      SISLANDS_SMC_STATETABLE *table)
>   {
> @@ -4989,10 +5027,10 @@ static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
>   							      &table->ACPIState.level.std_vddc);
>   		}
>   		table->ACPIState.level.gen2PCIE =
> -			(u8)amdgpu_get_pcie_gen_support(adev,
> -							si_pi->sys_pcie_mask,
> -							si_pi->boot_pcie_gen,
> -							AMDGPU_PCIE_GEN1);
> +			(u8)si_gen_pcie_gen_support(adev,
> +						    si_pi->sys_pcie_mask,
> +						    si_pi->boot_pcie_gen,
> +						    AMDGPU_PCIE_GEN1);
>   
>   		if (si_pi->vddc_phase_shed_control)
>   			si_populate_phase_shedding_value(adev,
> @@ -7148,10 +7186,10 @@ static void si_parse_pplib_clock_info(struct amdgpu_device *adev,
>   	pl->vddc = le16_to_cpu(clock_info->si.usVDDC);
>   	pl->vddci = le16_to_cpu(clock_info->si.usVDDCI);
>   	pl->flags = le32_to_cpu(clock_info->si.ulFlags);
> -	pl->pcie_gen = amdgpu_get_pcie_gen_support(adev,
> -						   si_pi->sys_pcie_mask,
> -						   si_pi->boot_pcie_gen,
> -						   clock_info->si.ucPCIEGen);
> +	pl->pcie_gen = si_gen_pcie_gen_support(adev,
> +					       si_pi->sys_pcie_mask,
> +					       si_pi->boot_pcie_gen,
> +					       clock_info->si.ucPCIEGen);
>   
>   	/* patch up vddc if necessary */
>   	ret = si_get_leakage_voltage_from_leakage_index(adev, pl->vddc,
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> index bc0be6818e21..8c267682eeef 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> @@ -595,6 +595,13 @@ struct rv7xx_power_info {
>   	RV770_SMC_STATETABLE smc_statetable;
>   };
>   
> +enum amdgpu_pcie_gen {
> +	AMDGPU_PCIE_GEN1 = 0,
> +	AMDGPU_PCIE_GEN2 = 1,
> +	AMDGPU_PCIE_GEN3 = 2,
> +	AMDGPU_PCIE_GEN_INVALID = 0xffff
> +};
> +
>   struct rv7xx_pl {
>   	u32 sclk;
>   	u32 mclk;
> 


* Re: [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c
  2021-11-30  7:42 ` [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c Evan Quan
@ 2021-11-30 12:27   ` Lazar, Lijo
  2021-12-01  2:47     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 12:27 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> Move it to kv_dpm.c instead.
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I554332b386491a79b7913f72786f1e2cb1f8165b
> --
> v1->v2:
>    - rename the API with a "kv_" prefix (Alex)
> ---
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 23 ---------------------
>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   |  2 --
>   drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c | 25 ++++++++++++++++++++++-
>   3 files changed, 24 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index fbfc07a83122..ecaf0081bc31 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -209,29 +209,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
>   	return vrefresh;
>   }
>   
> -bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
> -{
> -	switch (sensor) {
> -	case THERMAL_TYPE_RV6XX:
> -	case THERMAL_TYPE_RV770:
> -	case THERMAL_TYPE_EVERGREEN:
> -	case THERMAL_TYPE_SUMO:
> -	case THERMAL_TYPE_NI:
> -	case THERMAL_TYPE_SI:
> -	case THERMAL_TYPE_CI:
> -	case THERMAL_TYPE_KV:
> -		return true;
> -	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
> -	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
> -		return false; /* need special handling */
> -	case THERMAL_TYPE_NONE:
> -	case THERMAL_TYPE_EXTERNAL:
> -	case THERMAL_TYPE_EXTERNAL_GPIO:
> -	default:
> -		return false;
> -	}
> -}
> -
>   union power_info {
>   	struct _ATOM_POWERPLAY_INFO info;
>   	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> index f43b96dfe9d8..01120b302590 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> @@ -374,8 +374,6 @@ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
>   int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
>   			   void *data, uint32_t *size);
>   
> -bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor);
> -
>   int amdgpu_get_platform_caps(struct amdgpu_device *adev);
>   
>   int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> index bcae42cef374..380a5336c74f 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> @@ -1256,6 +1256,29 @@ static void kv_dpm_enable_bapm(void *handle, bool enable)
>   	}
>   }
>   
> +static bool kv_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
> +{
> +	switch (sensor) {
> +	case THERMAL_TYPE_RV6XX:
> +	case THERMAL_TYPE_RV770:
> +	case THERMAL_TYPE_EVERGREEN:
> +	case THERMAL_TYPE_SUMO:
> +	case THERMAL_TYPE_NI:
> +	case THERMAL_TYPE_SI:
> +	case THERMAL_TYPE_CI:
> +	case THERMAL_TYPE_KV:
> +		return true;
> +	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
> +	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
> +		return false; /* need special handling */
> +	case THERMAL_TYPE_NONE:
> +	case THERMAL_TYPE_EXTERNAL:
> +	case THERMAL_TYPE_EXTERNAL_GPIO:
> +	default:
> +		return false;
> +	}
> +}

None of these names look KV-specific. Remove the family-specific ones 
like RV, SI, NI, CI, etc., and keep KV plus the generic ones like 
GPIO/EXTERNAL/NONE. I don't see a chance of external diodes being used with KV.
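
A minimal sketch of the trimmed version, keeping only the cases
reachable on KV (illustrative, not taken from the patch):

static bool kv_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
{
	switch (sensor) {
	case THERMAL_TYPE_KV:
		return true;
	case THERMAL_TYPE_NONE:
	case THERMAL_TYPE_EXTERNAL:
	case THERMAL_TYPE_EXTERNAL_GPIO:
	default:
		return false;
	}
}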

Thanks,
Lijo

> +
>   static int kv_dpm_enable(struct amdgpu_device *adev)
>   {
>   	struct kv_power_info *pi = kv_get_pi(adev);
> @@ -1352,7 +1375,7 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
>   	}
>   
>   	if (adev->irq.installed &&
> -	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
> +	    kv_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
>   		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
>   		if (ret) {
>   			DRM_ERROR("kv_set_thermal_temperature_range failed\n");
> 


* RE: [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c
  2021-11-30  7:42 ` [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c Evan Quan
@ 2021-11-30 13:04   ` Chen, Guchun
  2021-12-01  2:06     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Chen, Guchun @ 2021-11-30 13:04 UTC (permalink / raw)
  To: Quan, Evan, amd-gfx
  Cc: Deucher, Alexander, Lazar, Lijo, Feng, Kenneth, Koenig,
	 Christian, Quan, Evan

[Public]

Two nit-picks.

1. It's better to drop the bare "return" at the end of amdgpu_dpm_get_current_power_state (see the sketch below).

2. When the function pointer is NULL, some functions return 0 while others return -EOPNOTSUPP. Is there a reason for the difference?
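
A sketch of nit-pick 1, with the redundant bare return dropped from the
tail of the void function (body elided):

void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
					enum amd_pm_state_type *state)
{
	...
	*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
	if (*state < POWER_STATE_TYPE_DEFAULT ||
	    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
		*state = adev->pm.dpm.user_state;
}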

Regards,
Guchun

-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Evan Quan
Sent: Tuesday, November 30, 2021 3:43 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Lazar, Lijo <Lijo.Lazar@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>; Quan, Evan <Evan.Quan@amd.com>
Subject: [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c

amdgpu_pm.c holds all the user sysfs/hwmon interfaces. It's another
client of our power APIs. It's not proper for it to reach into power
implementation details there.
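
Every such access now goes through a wrapper that hides the pp_funcs
lookup behind a stable API; the common shape is the following (one
wrapper, extracted from the diff below):

int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
				 struct pp_states_info *states)
{
	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

	if (!pp_funcs->get_pp_num_states)
		return -EOPNOTSUPP;

	return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
}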

Signed-off-by: Evan Quan <evan.quan@amd.com>
Change-Id: I397853ddb13eacfce841366de2a623535422df9a
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 458 ++++++++++++++++++-
 drivers/gpu/drm/amd/pm/amdgpu_pm.c        | 519 ++++++++--------------
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 160 +++----
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c |   3 -
 4 files changed, 709 insertions(+), 431 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 9b332c8a0079..3c59f16c7a6f 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -1453,7 +1453,9 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 	if (equal)
 		return;
 
-	amdgpu_dpm_set_power_state(adev);
+	if (adev->powerplay.pp_funcs->set_power_state)
+		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
+
 	amdgpu_dpm_post_set_power_state(adev);
 
 	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
@@ -1709,3 +1711,457 @@ int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
 
 	return smu_get_ecc_info(&adev->smu, umc_ecc);
 }
+
+struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
+						     uint32_t idx)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_vce_clock_state)
+		return NULL;
+
+	return pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle,
+					     idx);
+}
+
+void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
+					enum amd_pm_state_type *state)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_current_power_state) {
+		*state = adev->pm.dpm.user_state;
+		return;
+	}
+
+	*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
+	if (*state < POWER_STATE_TYPE_DEFAULT ||
+	    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
+		*state = adev->pm.dpm.user_state;
+
+	return;
+}
+
+void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
+				enum amd_pm_state_type state)
+{
+	adev->pm.dpm.user_state = state;
+
+	if (adev->powerplay.pp_funcs->dispatch_tasks)
+		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
+	else
+		amdgpu_pm_compute_clocks(adev);
+}
+
+enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	enum amd_dpm_forced_level level;
+
+	if (pp_funcs->get_performance_level)
+		level = pp_funcs->get_performance_level(adev->powerplay.pp_handle);
+	else
+		level = adev->pm.dpm.forced_level;
+
+	return level;
+}
+
+int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
+				       enum amd_dpm_forced_level level)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (pp_funcs->force_performance_level) {
+		if (adev->pm.dpm.thermal_active)
+			return -EINVAL;
+
+		if (pp_funcs->force_performance_level(adev->powerplay.pp_handle,
+						      level))
+			return -EINVAL;
+	}
+
+	adev->pm.dpm.forced_level = level;
+
+	return 0;
+}
+
+int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
+				 struct pp_states_info *states)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_pp_num_states)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
+}
+
+int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
+			      enum amd_pp_task task_id,
+			      enum amd_pm_state_type *user_state)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->dispatch_tasks)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->dispatch_tasks(adev->powerplay.pp_handle, task_id, user_state);
+}
+
+int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_pp_table)
+		return 0;
+
+	return pp_funcs->get_pp_table(adev->powerplay.pp_handle, table);
+}
+
+int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
+				      uint32_t type,
+				      long *input,
+				      uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fine_grain_clk_vol)
+		return 0;
+
+	return pp_funcs->set_fine_grain_clk_vol(adev->powerplay.pp_handle,
+						type,
+						input,
+						size);
+}
+
+int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
+				  uint32_t type,
+				  long *input,
+				  uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->odn_edit_dpm_table)
+		return 0;
+
+	return pp_funcs->odn_edit_dpm_table(adev->powerplay.pp_handle,
+					    type,
+					    input,
+					    size);
+}
+
+int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->print_clock_levels)
+		return 0;
+
+	return pp_funcs->print_clock_levels(adev->powerplay.pp_handle,
+					    type,
+					    buf);
+}
+
+int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
+				    uint64_t ppfeature_masks)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_ppfeature_status)
+		return 0;
+
+	return pp_funcs->set_ppfeature_status(adev->powerplay.pp_handle,
+					      ppfeature_masks);
+}
+
+int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_ppfeature_status)
+		return 0;
+
+	return pp_funcs->get_ppfeature_status(adev->powerplay.pp_handle,
+					      buf);
+}
+
+int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
+				 enum pp_clock_type type,
+				 uint32_t mask)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->force_clock_level)
+		return 0;
+
+	return pp_funcs->force_clock_level(adev->powerplay.pp_handle,
+					   type,
+					   mask);
+}
+
+int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_sclk_od)
+		return 0;
+
+	return pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
+}
+
+int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_sclk_od)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
+
+	if (amdgpu_dpm_dispatch_task(adev,
+				     AMD_PP_TASK_READJUST_POWER_STATE,
+				     NULL) == -EOPNOTSUPP) {
+		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
+		amdgpu_pm_compute_clocks(adev);
+	}
+
+	return 0;
+}
+
+int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_mclk_od)
+		return 0;
+
+	return pp_funcs->get_mclk_od(adev->powerplay.pp_handle);
+}
+
+int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_mclk_od)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
+
+	if (amdgpu_dpm_dispatch_task(adev,
+				     AMD_PP_TASK_READJUST_POWER_STATE,
+				     NULL) == -EOPNOTSUPP) {
+		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
+		amdgpu_pm_compute_clocks(adev);
+	}
+
+	return 0;
+}
+
+int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
+				      char *buf)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_power_profile_mode)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->get_power_profile_mode(adev->powerplay.pp_handle,
+						buf);
+}
+
+int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
+				      long *input, uint32_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_power_profile_mode)
+		return 0;
+
+	return pp_funcs->set_power_profile_mode(adev->powerplay.pp_handle,
+						input,
+						size);
+}
+
+int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_gpu_metrics)
+		return 0;
+
+	return pp_funcs->get_gpu_metrics(adev->powerplay.pp_handle, table);
+}
+
+int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t *fan_mode)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_control_mode)
+		return -EOPNOTSUPP;
+
+	*fan_mode = pp_funcs->get_fan_control_mode(adev->powerplay.pp_handle);
+
+	return 0;
+}
+
+int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_speed_pwm)
+		return -EINVAL;
+
+	return pp_funcs->set_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t *speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_speed_pwm)
+		return -EINVAL;
+
+	return pp_funcs->get_fan_speed_pwm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t *speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_fan_speed_rpm)
+		return -EINVAL;
+
+	return pp_funcs->get_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t speed)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_speed_rpm)
+		return -EINVAL;
+
+	return pp_funcs->set_fan_speed_rpm(adev->powerplay.pp_handle, speed);
+}
+
+int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t mode)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_fan_control_mode)
+		return -EOPNOTSUPP;
+
+	pp_funcs->set_fan_control_mode(adev->powerplay.pp_handle, mode);
+
+	return 0;
+}
+
+int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
+			       uint32_t *limit,
+			       enum pp_power_limit_level pp_limit_level,
+			       enum pp_power_type power_type)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_power_limit)
+		return -ENODATA;
+
+	return pp_funcs->get_power_limit(adev->powerplay.pp_handle,
+					 limit,
+					 pp_limit_level,
+					 power_type);
+}
+
+int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
+			       uint32_t limit)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_power_limit)
+		return -EINVAL;
+
+	return pp_funcs->set_power_limit(adev->powerplay.pp_handle, limit);
+}
+
+int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev)
+{
+	if (!is_support_sw_smu(adev))
+		return false;
+
+	return is_support_cclk_dpm(adev);
+}
+
+int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
+						       struct seq_file *m)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->debugfs_print_current_performance_level)
+		return -EOPNOTSUPP;
+
+	pp_funcs->debugfs_print_current_performance_level(adev->powerplay.pp_handle,
+							  m);
+
+	return 0;
+}
+
+int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+				       void **addr,
+				       size_t *size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->get_smu_prv_buf_details)
+		return -ENOSYS;
+
+	return pp_funcs->get_smu_prv_buf_details(adev->powerplay.pp_handle,
+						 addr,
+						 size);
+}
+
+int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
+{
+	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+
+	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
+	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
+		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
+		return true;
+
+	return false;
+}
+
+int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+			    const char *buf,
+			    size_t size)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+
+	if (!pp_funcs->set_pp_table)
+		return -EOPNOTSUPP;
+
+	return pp_funcs->set_pp_table(adev->powerplay.pp_handle,
+				      buf,
+				      size);
+}
+
+int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
+{
+	return adev->smu.cpu_core_num;
+}
+
+void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
+{
+	if (!is_support_sw_smu(adev))
+		return;
+
+	amdgpu_smu_stb_debug_fs_init(adev);
+}
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index 082539c70fd4..3382d30b5d90 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -34,7 +34,6 @@
 #include <linux/nospec.h>
 #include <linux/pm_runtime.h>
 #include <asm/processor.h>
-#include "hwmgr.h"
 
 static const struct cg_flag_name clocks[] = {
 	{AMD_CG_SUPPORT_GFX_FGCG, "Graphics Fine Grain Clock Gating"},
@@ -132,7 +131,6 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum amd_pm_state_type pm;
 	int ret;
 
@@ -147,11 +145,7 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_current_power_state) {
-		pm = amdgpu_dpm_get_current_power_state(adev);
-	} else {
-		pm = adev->pm.dpm.user_state;
-	}
+	amdgpu_dpm_get_current_power_state(adev, &pm);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -191,19 +185,8 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		mutex_lock(&adev->pm.mutex);
-		adev->pm.dpm.user_state = state;
-		mutex_unlock(&adev->pm.mutex);
-	} else if (adev->powerplay.pp_funcs->dispatch_tasks) {
-		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
-	} else {
-		mutex_lock(&adev->pm.mutex);
-		adev->pm.dpm.user_state = state;
-		mutex_unlock(&adev->pm.mutex);
+	amdgpu_dpm_set_power_state(adev, state);
 
-		amdgpu_pm_compute_clocks(adev);
-	}
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -290,10 +273,7 @@ static ssize_t amdgpu_get_power_dpm_force_performance_level(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_performance_level)
-		level = amdgpu_dpm_get_performance_level(adev);
-	else
-		level = adev->pm.dpm.forced_level;
+	level = amdgpu_dpm_get_performance_level(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -318,7 +298,6 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum amd_dpm_forced_level level;
 	enum amd_dpm_forced_level current_level;
 	int ret = 0;
@@ -358,11 +337,7 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_performance_level)
-		current_level = amdgpu_dpm_get_performance_level(adev);
-	else
-		current_level = adev->pm.dpm.forced_level;
-
+	current_level = amdgpu_dpm_get_performance_level(adev);
 	if (current_level == level) {
 		pm_runtime_mark_last_busy(ddev->dev);
 		pm_runtime_put_autosuspend(ddev->dev);
@@ -390,25 +365,12 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
 		return -EINVAL;
 	}
 
-	if (pp_funcs->force_performance_level) {
-		mutex_lock(&adev->pm.mutex);
-		if (adev->pm.dpm.thermal_active) {
-			mutex_unlock(&adev->pm.mutex);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-		ret = amdgpu_dpm_force_performance_level(adev, level);
-		if (ret) {
-			mutex_unlock(&adev->pm.mutex);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		} else {
-			adev->pm.dpm.forced_level = level;
-		}
-		mutex_unlock(&adev->pm.mutex);
+	if (amdgpu_dpm_force_performance_level(adev, level)) {
+		pm_runtime_mark_last_busy(ddev->dev);
+		pm_runtime_put_autosuspend(ddev->dev);
+		return -EINVAL;
 	}
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -421,7 +383,6 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_states_info data;
 	uint32_t i;
 	int buf_len, ret;
@@ -437,11 +398,8 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_pp_num_states) {
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-	} else {
+	if (amdgpu_dpm_get_pp_num_states(adev, &data))
 		memset(&data, 0, sizeof(data));
-	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -463,7 +421,6 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	struct pp_states_info data = {0};
 	enum amd_pm_state_type pm = 0;
 	int i = 0, ret = 0;
@@ -479,15 +436,16 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
 		return ret;
 	}
 
-	if (pp_funcs->get_current_power_state
-		 && pp_funcs->get_pp_num_states) {
-		pm = amdgpu_dpm_get_current_power_state(adev);
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-	}
+	amdgpu_dpm_get_current_power_state(adev, &pm);
+
+	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return ret;
+
 	for (i = 0; i < data.nums; i++) {
 		if (pm == data.states[i])
 			break;
@@ -525,6 +483,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
 	enum amd_pm_state_type state = 0;
+	struct pp_states_info data;
 	unsigned long idx;
 	int ret;
 
@@ -533,41 +492,49 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if (strlen(buf) == 1)
-		adev->pp_force_state_enabled = false;
-	else if (is_support_sw_smu(adev))
-		adev->pp_force_state_enabled = false;
-	else if (adev->powerplay.pp_funcs->dispatch_tasks &&
-			adev->powerplay.pp_funcs->get_pp_num_states) {
-		struct pp_states_info data;
-
-		ret = kstrtoul(buf, 0, &idx);
-		if (ret || idx >= ARRAY_SIZE(data.states))
-			return -EINVAL;
+	adev->pp_force_state_enabled = false;
 
-		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+	if (strlen(buf) == 1)
+		return count;
 
-		amdgpu_dpm_get_pp_num_states(adev, &data);
-		state = data.states[idx];
+	ret = kstrtoul(buf, 0, &idx);
+	if (ret || idx >= ARRAY_SIZE(data.states))
+		return -EINVAL;
 
-		ret = pm_runtime_get_sync(ddev->dev);
-		if (ret < 0) {
-			pm_runtime_put_autosuspend(ddev->dev);
-			return ret;
-		}
+	idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
 
-		/* only set user selected power states */
-		if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
-		    state != POWER_STATE_TYPE_DEFAULT) {
-			amdgpu_dpm_dispatch_task(adev,
-					AMD_PP_TASK_ENABLE_USER_STATE, &state);
-			adev->pp_force_state_enabled = true;
-		}
-		pm_runtime_mark_last_busy(ddev->dev);
+	ret = pm_runtime_get_sync(ddev->dev);
+	if (ret < 0) {
 		pm_runtime_put_autosuspend(ddev->dev);
+		return ret;
+	}
+
+	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
+	if (ret)
+		goto err_out;
+
+	state = data.states[idx];
+
+	/* only set user selected power states */
+	if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
+	    state != POWER_STATE_TYPE_DEFAULT) {
+		ret = amdgpu_dpm_dispatch_task(adev,
+				AMD_PP_TASK_ENABLE_USER_STATE, &state);
+		if (ret)
+			goto err_out;
+
+		adev->pp_force_state_enabled = true;
 	}
 
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+
 	return count;
+
+err_out:
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+	return ret;
 }
 
 /**
@@ -601,17 +568,13 @@ static ssize_t amdgpu_get_pp_table(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_pp_table) {
-		size = amdgpu_dpm_get_pp_table(adev, &table);
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		if (size < 0)
-			return size;
-	} else {
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		return 0;
-	}
+	size = amdgpu_dpm_get_pp_table(adev, &table);
+
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+
+	if (size <= 0)
+		return size;
 
 	if (size >= PAGE_SIZE)
 		size = PAGE_SIZE - 1;
@@ -642,15 +605,13 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
 	}
 
 	ret = amdgpu_dpm_set_pp_table(adev, buf, count);
-	if (ret) {
-		pm_runtime_mark_last_busy(ddev->dev);
-		pm_runtime_put_autosuspend(ddev->dev);
-		return ret;
-	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return ret;
+
 	return count;
 }
 
@@ -866,46 +827,32 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_fine_grain_clk_vol) {
-		ret = amdgpu_dpm_set_fine_grain_clk_vol(adev, type,
-							parameter,
-							parameter_size);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	if (amdgpu_dpm_set_fine_grain_clk_vol(adev,
+					      type,
+					      parameter,
+					      parameter_size))
+		goto err_out;
 
-	if (adev->powerplay.pp_funcs->odn_edit_dpm_table) {
-		ret = amdgpu_dpm_odn_edit_dpm_table(adev, type,
-						    parameter, parameter_size);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	if (amdgpu_dpm_odn_edit_dpm_table(adev, type,
+					  parameter, parameter_size))
+		goto err_out;
 
 	if (type == PP_OD_COMMIT_DPM_TABLE) {
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev,
-						 AMD_PP_TASK_READJUST_POWER_STATE,
-						 NULL);
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return count;
-		} else {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
+		if (amdgpu_dpm_dispatch_task(adev,
+					     AMD_PP_TASK_READJUST_POWER_STATE,
+					     NULL))
+			goto err_out;
 	}
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
 	return count;
+
+err_out:
+	pm_runtime_mark_last_busy(ddev->dev);
+	pm_runtime_put_autosuspend(ddev->dev);
+	return -EINVAL;
 }
 
 static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
@@ -928,8 +875,8 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->print_clock_levels) {
-		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+	size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+	if (size > 0) {
 		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf+size);
 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf+size);
@@ -985,17 +932,14 @@ static ssize_t amdgpu_set_pp_features(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_ppfeature_status) {
-		ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
-		if (ret) {
-			pm_runtime_mark_last_busy(ddev->dev);
-			pm_runtime_put_autosuspend(ddev->dev);
-			return -EINVAL;
-		}
-	}
+	ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -1019,9 +963,8 @@ static ssize_t amdgpu_get_pp_features(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_ppfeature_status)
-		size = amdgpu_dpm_get_ppfeature_status(adev, buf);
-	else
+	size = amdgpu_dpm_get_ppfeature_status(adev, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1080,9 +1023,8 @@ static ssize_t amdgpu_get_pp_dpm_clock(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->print_clock_levels)
-		size = amdgpu_dpm_print_clock_levels(adev, type, buf);
-	else
+	size = amdgpu_dpm_print_clock_levels(adev, type, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1151,10 +1093,7 @@ static ssize_t amdgpu_set_pp_dpm_clock(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->force_clock_level)
-		ret = amdgpu_dpm_force_clock_level(adev, type, mask);
-	else
-		ret = 0;
+	ret = amdgpu_dpm_force_clock_level(adev, type, mask);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1305,10 +1244,7 @@ static ssize_t amdgpu_get_pp_sclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev))
-		value = 0;
-	else if (adev->powerplay.pp_funcs->get_sclk_od)
-		value = amdgpu_dpm_get_sclk_od(adev);
+	value = amdgpu_dpm_get_sclk_od(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1342,19 +1278,7 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		value = 0;
-	} else {
-		if (adev->powerplay.pp_funcs->set_sclk_od)
-			amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
-
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
-		} else {
-			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-			amdgpu_pm_compute_clocks(adev);
-		}
-	}
+	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1382,10 +1306,7 @@ static ssize_t amdgpu_get_pp_mclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev))
-		value = 0;
-	else if (adev->powerplay.pp_funcs->get_mclk_od)
-		value = amdgpu_dpm_get_mclk_od(adev);
+	value = amdgpu_dpm_get_mclk_od(adev);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1419,19 +1340,7 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
 		return ret;
 	}
 
-	if (is_support_sw_smu(adev)) {
-		value = 0;
-	} else {
-		if (adev->powerplay.pp_funcs->set_mclk_od)
-			amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
-
-		if (adev->powerplay.pp_funcs->dispatch_tasks) {
-			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
-		} else {
-			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
-			amdgpu_pm_compute_clocks(adev);
-		}
-	}
+	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1479,9 +1388,8 @@ static ssize_t amdgpu_get_pp_power_profile_mode(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_power_profile_mode)
-		size = amdgpu_dpm_get_power_profile_mode(adev, buf);
-	else
+	size = amdgpu_dpm_get_power_profile_mode(adev, buf);
+	if (size <= 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
@@ -1545,8 +1453,7 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->set_power_profile_mode)
-		ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
+	ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
 
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
@@ -1812,9 +1719,7 @@ static ssize_t amdgpu_get_gpu_metrics(struct device *dev,
 		return ret;
 	}
 
-	if (adev->powerplay.pp_funcs->get_gpu_metrics)
-		size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
-
+	size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
 	if (size <= 0)
 		goto out;
 
@@ -2053,7 +1958,6 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 {
 	struct device_attribute *dev_attr = &attr->dev_attr;
 	const char *attr_name = dev_attr->attr.name;
-	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
 	enum amd_asic_type asic_type = adev->asic_type;
 
 	if (!(attr->flags & mask)) {
@@ -2076,9 +1980,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 			*states = ATTR_STATE_UNSUPPORTED;
 	} else if (DEVICE_ATTR_IS(pp_od_clk_voltage)) {
 		*states = ATTR_STATE_UNSUPPORTED;
-		if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
-		    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
-			(!is_support_sw_smu(adev) && hwmgr->od_enabled))
+		if (amdgpu_dpm_is_overdrive_supported(adev))
 			*states = ATTR_STATE_SUPPORTED;
 	} else if (DEVICE_ATTR_IS(mem_busy_percent)) {
 		if (adev->flags & AMD_IS_APU || asic_type == CHIP_VEGA10)
@@ -2105,8 +2007,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 		if (!(asic_type == CHIP_VANGOGH || asic_type == CHIP_SIENNA_CICHLID))
 			*states = ATTR_STATE_UNSUPPORTED;
 	} else if (DEVICE_ATTR_IS(pp_power_profile_mode)) {
-		if (!adev->powerplay.pp_funcs->get_power_profile_mode ||
-		    amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
+		if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
 			*states = ATTR_STATE_UNSUPPORTED;
 	}
 
@@ -2389,17 +2290,14 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return sysfs_emit(buf, "%u\n", pwm_mode);
 }
 
@@ -2427,17 +2325,14 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	amdgpu_dpm_set_fan_control_mode(adev, value);
+	ret = amdgpu_dpm_set_fan_control_mode(adev, value);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -2469,32 +2364,29 @@ static ssize_t amdgpu_hwmon_set_pwm1(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
+	err = kstrtou32(buf, 10, &value);
+	if (err)
+		return err;
+
 	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (err < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return err;
 	}
 
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
+	if (err)
+		goto out;
+
 	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
 		pr_info("manual fan speed control should be enabled first\n");
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
+		err = -EINVAL;
+		goto out;
 	}
 
-	err = kstrtou32(buf, 10, &value);
-	if (err) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return err;
-	}
-
-	if (adev->powerplay.pp_funcs->set_fan_speed_pwm)
-		err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
 
+out:
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
@@ -2523,10 +2415,7 @@ static ssize_t amdgpu_hwmon_get_pwm1(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_pwm)
-		err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2556,10 +2445,7 @@ static ssize_t amdgpu_hwmon_get_fan1_input(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
-		err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2653,10 +2539,7 @@ static ssize_t amdgpu_hwmon_get_fan1_target(struct device *dev,
 		return err;
 	}
 
-	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
-		err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -2681,32 +2564,28 @@ static ssize_t amdgpu_hwmon_set_fan1_target(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
+	err = kstrtou32(buf, 10, &value);
+	if (err)
+		return err;
+
 	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (err < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return err;
 	}
 
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
+	if (err)
+		goto out;
 
 	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -ENODATA;
-	}
-
-	err = kstrtou32(buf, 10, &value);
-	if (err) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return err;
+		err = -ENODATA;
+		goto out;
 	}
 
-	if (adev->powerplay.pp_funcs->set_fan_speed_rpm)
-		err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
 
+out:
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
@@ -2735,17 +2614,14 @@ static ssize_t amdgpu_hwmon_get_fan1_enable(struct device *dev,
 		return ret;
 	}
 
-	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-
-	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (ret)
+		return -EINVAL;
+
 	return sysfs_emit(buf, "%i\n", pwm_mode == AMD_FAN_CTRL_AUTO ? 0 : 1);
 }
 
@@ -2781,16 +2657,14 @@ static ssize_t amdgpu_hwmon_set_fan1_enable(struct device *dev,
 		return err;
 	}
 
-	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
-		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
-		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
-		return -EINVAL;
-	}
-	amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
+	err = amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 
+	if (err)
+		return -EINVAL;
+
 	return count;
 }
 
@@ -2926,7 +2800,6 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
 					enum pp_power_limit_level pp_limit_level)
 {
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
 	uint32_t limit;
 	ssize_t size;
@@ -2937,16 +2810,13 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if ( !(pp_funcs && pp_funcs->get_power_limit))
-		return -ENODATA;
-
 	r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
 	if (r < 0) {
 		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
 		return r;
 	}
 
-	r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
+	r = amdgpu_dpm_get_power_limit(adev, &limit,
 				      pp_limit_level, power_type);
 
 	if (!r)
@@ -3001,7 +2871,6 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
 		size_t count)
 {
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
 	int limit_type = to_sensor_dev_attr(attr)->index;
 	int err;
 	u32 value;
@@ -3027,10 +2896,7 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
 		return err;
 	}
 
-	if (pp_funcs && pp_funcs->set_power_limit)
-		err = pp_funcs->set_power_limit(adev->powerplay.pp_handle, value);
-	else
-		err = -EINVAL;
+	err = amdgpu_dpm_set_power_limit(adev, value);
 
 	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
 	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
@@ -3303,6 +3169,7 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	struct device *dev = kobj_to_dev(kobj);
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
 	umode_t effective_mode = attr->mode;
+	uint32_t speed = 0;
 
 	/* under multi-vf mode, the hwmon attributes are all not supported */
 	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
@@ -3367,20 +3234,18 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	     attr == &sensor_dev_attr_fan1_enable.dev_attr.attr))
 		return 0;
 
-	if (!is_support_sw_smu(adev)) {
-		/* mask fan attributes if we have no bindings for this asic to expose */
-		if ((!adev->powerplay.pp_funcs->get_fan_speed_pwm &&
-		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
-		    (!adev->powerplay.pp_funcs->get_fan_control_mode &&
-		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
-			effective_mode &= ~S_IRUGO;
+	/* mask fan attributes if we have no bindings for this asic to expose */
+	if (((amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
+	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
+	    ((amdgpu_dpm_get_fan_control_mode(adev, &speed) == -EOPNOTSUPP) &&
+	     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
+		effective_mode &= ~S_IRUGO;
 
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
-		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
-		    (!adev->powerplay.pp_funcs->set_fan_control_mode &&
-		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
-			effective_mode &= ~S_IWUSR;
-	}
+	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
+	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
+	      ((amdgpu_dpm_set_fan_control_mode(adev, speed) == -EOPNOTSUPP) &&
+	      attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
+		effective_mode &= ~S_IWUSR;
 
 	if (((adev->family == AMDGPU_FAMILY_SI) ||
 		 ((adev->flags & AMD_IS_APU) &&
@@ -3397,22 +3262,20 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	    (attr == &sensor_dev_attr_power1_average.dev_attr.attr))
 		return 0;
 
-	if (!is_support_sw_smu(adev)) {
-		/* hide max/min values if we can't both query and manage the fan */
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_pwm) &&
-		     (!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
-		    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
-		     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
-			return 0;
+	/* hide max/min values if we can't both query and manage the fan */
+	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
+	      (amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
+	      (amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
+	      (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL)) &&
+	    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
+	     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
+		return 0;
 
-		if ((!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
-		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
-		    (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
-		     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
-			return 0;
-	}
+	if ((amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
+	     (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL) &&
+	     (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
+	     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
+		return 0;
 
 	if ((adev->family == AMDGPU_FAMILY_SI ||	/* not implemented yet */
 	     adev->family == AMDGPU_FAMILY_KV) &&	/* not implemented yet */
@@ -3542,14 +3405,15 @@ static void amdgpu_debugfs_prints_cpu_info(struct seq_file *m,
 	uint16_t *p_val;
 	uint32_t size;
 	int i;
+	uint32_t num_cpu_cores = amdgpu_dpm_get_num_cpu_cores(adev);
 
-	if (is_support_cclk_dpm(adev)) {
-		p_val = kcalloc(adev->smu.cpu_core_num, sizeof(uint16_t),
+	if (amdgpu_dpm_is_cclk_dpm_supported(adev)) {
+		p_val = kcalloc(num_cpu_cores, sizeof(uint16_t),
 				GFP_KERNEL);
 
 		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_CPU_CLK,
 					    (void *)p_val, &size)) {
-			for (i = 0; i < adev->smu.cpu_core_num; i++)
+			for (i = 0; i < num_cpu_cores; i++)
 				seq_printf(m, "\t%u MHz (CPU%d)\n",
 					   *(p_val + i), i);
 		}
@@ -3677,27 +3541,11 @@ static int amdgpu_debugfs_pm_info_show(struct seq_file *m, void *unused)
 		return r;
 	}
 
-	if (!adev->pm.dpm_enabled) {
-		seq_printf(m, "dpm not enabled\n");
-		pm_runtime_mark_last_busy(dev->dev);
-		pm_runtime_put_autosuspend(dev->dev);
-		return 0;
-	}
-
-	if (!is_support_sw_smu(adev) &&
-	    adev->powerplay.pp_funcs->debugfs_print_current_performance_level) {
-		mutex_lock(&adev->pm.mutex);
-		if (adev->powerplay.pp_funcs->debugfs_print_current_performance_level)
-			adev->powerplay.pp_funcs->debugfs_print_current_performance_level(adev, m);
-		else
-			seq_printf(m, "Debugfs support not implemented for this asic\n");
-		mutex_unlock(&adev->pm.mutex);
-		r = 0;
-	} else {
+	if (amdgpu_dpm_debugfs_print_current_performance_level(adev, m)) {
 		r = amdgpu_debugfs_pm_info_pp(m, adev);
+		if (r)
+			goto out;
 	}
-	if (r)
-		goto out;
 
 	amdgpu_device_ip_get_clockgating_state(adev, &flags);
 
@@ -3723,21 +3571,18 @@ static ssize_t amdgpu_pm_prv_buffer_read(struct file *f, char __user *buf,
 					 size_t size, loff_t *pos)
 {
 	struct amdgpu_device *adev = file_inode(f)->i_private;
-	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
-	void *pp_handle = adev->powerplay.pp_handle;
 	size_t smu_prv_buf_size;
 	void *smu_prv_buf;
+	int ret = 0;
 
 	if (amdgpu_in_reset(adev))
 		return -EPERM;
 	if (adev->in_suspend && !adev->in_runpm)
 		return -EPERM;
 
-	if (pp_funcs && pp_funcs->get_smu_prv_buf_details)
-		pp_funcs->get_smu_prv_buf_details(pp_handle, &smu_prv_buf,
-						  &smu_prv_buf_size);
-	else
-		return -ENOSYS;
+	ret = amdgpu_dpm_get_smu_prv_buf_details(adev, &smu_prv_buf, &smu_prv_buf_size);
+	if (ret)
+		return ret;
 
 	if (!smu_prv_buf || !smu_prv_buf_size)
 		return -EINVAL;
@@ -3770,6 +3615,6 @@ void amdgpu_debugfs_pm_init(struct amdgpu_device *adev)
 					 &amdgpu_debugfs_pm_prv_buffer_fops,
 					 adev->pm.smu_prv_buffer_size);
 
-	amdgpu_smu_stb_debug_fs_init(adev);
+	amdgpu_dpm_stb_debug_fs_init(adev);
 #endif
 }
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index 7289d379a9fb..039c40b1d0cb 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -262,9 +262,6 @@ enum amdgpu_pcie_gen {
 #define amdgpu_dpm_pre_set_power_state(adev) \
 		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
 
-#define amdgpu_dpm_set_power_state(adev) \
-		((adev)->powerplay.pp_funcs->set_power_state((adev)->powerplay.pp_handle))
-
 #define amdgpu_dpm_post_set_power_state(adev) \
 		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
 
@@ -280,100 +277,13 @@ enum amdgpu_pcie_gen {
 #define amdgpu_dpm_enable_bapm(adev, e) \
 		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
 
-#define amdgpu_dpm_set_fan_control_mode(adev, m) \
-		((adev)->powerplay.pp_funcs->set_fan_control_mode((adev)->powerplay.pp_handle, (m)))
-
-#define amdgpu_dpm_get_fan_control_mode(adev) \
-		((adev)->powerplay.pp_funcs->get_fan_control_mode((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_fan_speed_pwm(adev, s) \
-		((adev)->powerplay.pp_funcs->set_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
-
-#define amdgpu_dpm_get_fan_speed_pwm(adev, s) \
-		((adev)->powerplay.pp_funcs->get_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
-
-#define amdgpu_dpm_get_fan_speed_rpm(adev, s) \
-		((adev)->powerplay.pp_funcs->get_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
-
-#define amdgpu_dpm_set_fan_speed_rpm(adev, s) \
-		((adev)->powerplay.pp_funcs->set_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
-
-#define amdgpu_dpm_force_performance_level(adev, l) \
-		((adev)->powerplay.pp_funcs->force_performance_level((adev)->powerplay.pp_handle, (l)))
-
-#define amdgpu_dpm_get_current_power_state(adev) \
-		((adev)->powerplay.pp_funcs->get_current_power_state((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_get_pp_num_states(adev, data) \
-		((adev)->powerplay.pp_funcs->get_pp_num_states((adev)->powerplay.pp_handle, data))
-
-#define amdgpu_dpm_get_pp_table(adev, table) \
-		((adev)->powerplay.pp_funcs->get_pp_table((adev)->powerplay.pp_handle, table))
-
-#define amdgpu_dpm_set_pp_table(adev, buf, size) \
-		((adev)->powerplay.pp_funcs->set_pp_table((adev)->powerplay.pp_handle, buf, size))
-
-#define amdgpu_dpm_print_clock_levels(adev, type, buf) \
-		((adev)->powerplay.pp_funcs->print_clock_levels((adev)->powerplay.pp_handle, type, buf))
-
-#define amdgpu_dpm_force_clock_level(adev, type, level) \
-		((adev)->powerplay.pp_funcs->force_clock_level((adev)->powerplay.pp_handle, type, level))
-
-#define amdgpu_dpm_get_sclk_od(adev) \
-		((adev)->powerplay.pp_funcs->get_sclk_od((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_sclk_od(adev, value) \
-		((adev)->powerplay.pp_funcs->set_sclk_od((adev)->powerplay.pp_handle, value))
-
-#define amdgpu_dpm_get_mclk_od(adev) \
-		((adev)->powerplay.pp_funcs->get_mclk_od((adev)->powerplay.pp_handle))
-
-#define amdgpu_dpm_set_mclk_od(adev, value) \
-		((adev)->powerplay.pp_funcs->set_mclk_od((adev)->powerplay.pp_handle, value))
-
-#define amdgpu_dpm_dispatch_task(adev, task_id, user_state)		\
-		((adev)->powerplay.pp_funcs->dispatch_tasks)((adev)->powerplay.pp_handle, (task_id), (user_state))
-
 #define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
 		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
 
-#define amdgpu_dpm_get_vce_clock_state(adev, i)				\
-		((adev)->powerplay.pp_funcs->get_vce_clock_state((adev)->powerplay.pp_handle, (i)))
-
-#define amdgpu_dpm_get_performance_level(adev)				\
-		((adev)->powerplay.pp_funcs->get_performance_level((adev)->powerplay.pp_handle))
-
 #define amdgpu_dpm_reset_power_profile_state(adev, request) \
 		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
 			(adev)->powerplay.pp_handle, request))
 
-#define amdgpu_dpm_get_power_profile_mode(adev, buf) \
-		((adev)->powerplay.pp_funcs->get_power_profile_mode(\
-			(adev)->powerplay.pp_handle, buf))
-
-#define amdgpu_dpm_set_power_profile_mode(adev, parameter, size) \
-		((adev)->powerplay.pp_funcs->set_power_profile_mode(\
-			(adev)->powerplay.pp_handle, parameter, size))
-
-#define amdgpu_dpm_set_fine_grain_clk_vol(adev, type, parameter, size) \
-		((adev)->powerplay.pp_funcs->set_fine_grain_clk_vol(\
-			(adev)->powerplay.pp_handle, type, parameter, size))
-
-#define amdgpu_dpm_odn_edit_dpm_table(adev, type, parameter, size) \
-		((adev)->powerplay.pp_funcs->odn_edit_dpm_table(\
-			(adev)->powerplay.pp_handle, type, parameter, size))
-
-#define amdgpu_dpm_get_ppfeature_status(adev, buf) \
-		((adev)->powerplay.pp_funcs->get_ppfeature_status(\
-			(adev)->powerplay.pp_handle, (buf)))
-
-#define amdgpu_dpm_set_ppfeature_status(adev, ppfeatures) \
-		((adev)->powerplay.pp_funcs->set_ppfeature_status(\
-			(adev)->powerplay.pp_handle, (ppfeatures)))
-
-#define amdgpu_dpm_get_gpu_metrics(adev, table) \
-		((adev)->powerplay.pp_funcs->get_gpu_metrics((adev)->powerplay.pp_handle, table))
-
 struct amdgpu_dpm {
 	struct amdgpu_ps        *ps;
 	/* number of valid power states */
@@ -598,4 +508,74 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
 				 enum gfx_change_state state);
 int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
 			    void *umc_ecc);
+struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
+						     uint32_t idx);
+void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev, enum amd_pm_state_type *state);
+void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
+				enum amd_pm_state_type state);
+enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev);
+int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
+				       enum amd_dpm_forced_level level);
+int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
+				 struct pp_states_info *states);
+int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
+			      enum amd_pp_task task_id,
+			      enum amd_pm_state_type *user_state);
+int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table);
+int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
+				      uint32_t type,
+				      long *input,
+				      uint32_t size);
+int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
+				  uint32_t type,
+				  long *input,
+				  uint32_t size);
+int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf);
+int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
+				    uint64_t ppfeature_masks);
+int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf);
+int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
+				 enum pp_clock_type type,
+				 uint32_t mask);
+int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev);
+int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value);
+int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev);
+int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value);
+int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
+				      char *buf);
+int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
+				      long *input, uint32_t size);
+int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table);
+int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t *fan_mode);
+int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t speed);
+int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
+				 uint32_t *speed);
+int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t *speed);
+int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
+				 uint32_t speed);
+int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
+				    uint32_t mode);
+int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
+			       uint32_t *limit,
+			       enum pp_power_limit_level pp_limit_level,
+			       enum pp_power_type power_type);
+int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
+			       uint32_t limit);
+int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev);
+int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
+						       struct seq_file *m);
+int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
+				       void **addr,
+				       size_t *size);
+int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev);
+int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
+			    const char *buf,
+			    size_t size);
+int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev);
+void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev);
 #endif
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index ef7d0e377965..eaed5aba7547 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -470,9 +470,6 @@ bool is_support_cclk_dpm(struct amdgpu_device *adev)
 {
 	struct smu_context *smu = &adev->smu;
 
-	if (!is_support_sw_smu(adev))
-		return false;
-
 	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
 		return false;
 
-- 
2.29.0

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-11-30  7:42 ` [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv) Evan Quan
@ 2021-11-30 13:21   ` Lazar, Lijo
  2021-12-01  3:13     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 13:21 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> Those APIs are used only by legacy ASICs(si/kv). They cannot be
> shared by other ASICs. So, we create a new holder for them.
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
> --
> v1->v2:
>    - move other APIs used by si/kv in amdgpu_atombios.c to the new
>      holder also(Alex)
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
>   .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
>   drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
>   drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
>   drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453 +++++++++++++++++
>   drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
>   drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
>   10 files changed, 1534 insertions(+), 1470 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>   create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> index 12a6b1c99c93..f2e447212e62 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> @@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>   	return 0;
>   }
>   
> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> -					    u32 clock,
> -					    bool strobe_mode,
> -					    struct atom_mpll_param *mpll_param)
> -{
> -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> -	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
> -	u8 frev, crev;
> -
> -	memset(&args, 0, sizeof(args));
> -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> -
> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> -		return -EINVAL;
> -
> -	switch (frev) {
> -	case 2:
> -		switch (crev) {
> -		case 1:
> -			/* SI */
> -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> -			args.ucInputFlag = 0;
> -			if (strobe_mode)
> -				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
> -
> -			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> -
> -			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> -			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
> -			mpll_param->post_div = args.ucPostDiv;
> -			mpll_param->dll_speed = args.ucDllSpeed;
> -			mpll_param->bwcntl = args.ucBWCntl;
> -			mpll_param->vco_mode =
> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
> -			mpll_param->yclk_sel =
> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> -			mpll_param->qdr =
> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> -			mpll_param->half_rate =
> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> -			break;
> -		default:
> -			return -EINVAL;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -	return 0;
> -}
> -
> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> -					     u32 eng_clock, u32 mem_clock)
> -{
> -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> -	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
> -	u32 tmp;
> -
> -	memset(&args, 0, sizeof(args));
> -
> -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> -
> -	args.ulTargetEngineClock = cpu_to_le32(tmp);
> -	if (mem_clock)
> -		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
> -
> -	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> -}
> -
> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> -					  u16 *vddc, u16 *vddci, u16 *mvdd)
> -{
> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> -	u8 frev, crev;
> -	u16 data_offset;
> -	union firmware_info *firmware_info;
> -
> -	*vddc = 0;
> -	*vddci = 0;
> -	*mvdd = 0;
> -
> -	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> -				   &frev, &crev, &data_offset)) {
> -		firmware_info =
> -			(union firmware_info *)(mode_info->atom_context->bios +
> -						data_offset);
> -		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
> -		if ((frev == 2) && (crev >= 2)) {
> -			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
> -			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
> -		}
> -	}
> -}
> -
> -union set_voltage {
> -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> -	struct _SET_VOLTAGE_PARAMETERS v1;
> -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> -};
> -
> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> -			     u16 voltage_id, u16 *voltage)
> -{
> -	union set_voltage args;
> -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> -	u8 frev, crev;
> -
> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> -		return -EINVAL;
> -
> -	switch (crev) {
> -	case 1:
> -		return -EINVAL;
> -	case 2:
> -		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
> -		args.v2.ucVoltageMode = 0;
> -		args.v2.usVoltageLevel = 0;
> -
> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> -
> -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> -		break;
> -	case 3:
> -		args.v3.ucVoltageType = voltage_type;
> -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> -
> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> -
> -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> -		break;
> -	default:
> -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> -		return -EINVAL;
> -	}
> -
> -	return 0;
> -}
> -
> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> -						      u16 *voltage,
> -						      u16 leakage_idx)
> -{
> -	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> -}
> -
> -union voltage_object_info {
> -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> -};
> -
> -union voltage_object {
> -	struct _ATOM_VOLTAGE_OBJECT v1;
> -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> -};
> -
> -
> -static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
> -									u8 voltage_type, u8 voltage_mode)
> -{
> -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
> -	u8 *start = (u8 *)v3;
> -
> -	while (offset < size) {
> -		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
> -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
> -			return vo;
> -		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
> -	}
> -	return NULL;
> -}
> -
> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> -			      u8 voltage_type,
> -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> -{
> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> -	u8 frev, crev;
> -	u16 data_offset, size;
> -	union voltage_object_info *voltage_info;
> -	union voltage_object *voltage_object = NULL;
> -
> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> -				   &frev, &crev, &data_offset)) {
> -		voltage_info = (union voltage_object_info *)
> -			(adev->mode_info.atom_context->bios + data_offset);
> -
> -		switch (frev) {
> -		case 3:
> -			switch (crev) {
> -			case 1:
> -				voltage_object = (union voltage_object *)
> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> -								      voltage_type,
> -								      VOLTAGE_OBJ_SVID2);
> -				if (voltage_object) {
> -					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
> -					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
> -				} else {
> -					return -EINVAL;
> -				}
> -				break;
> -			default:
> -				DRM_ERROR("unknown voltage object table\n");
> -				return -EINVAL;
> -			}
> -			break;
> -		default:
> -			DRM_ERROR("unknown voltage object table\n");
> -			return -EINVAL;
> -		}
> -
> -	}
> -	return 0;
> -}
> -
> -bool
> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> -				u8 voltage_type, u8 voltage_mode)
> -{
> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> -	u8 frev, crev;
> -	u16 data_offset, size;
> -	union voltage_object_info *voltage_info;
> -
> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> -				   &frev, &crev, &data_offset)) {
> -		voltage_info = (union voltage_object_info *)
> -			(adev->mode_info.atom_context->bios + data_offset);
> -
> -		switch (frev) {
> -		case 3:
> -			switch (crev) {
> -			case 1:
> -				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> -								  voltage_type, voltage_mode))
> -					return true;
> -				break;
> -			default:
> -				DRM_ERROR("unknown voltage object table\n");
> -				return false;
> -			}
> -			break;
> -		default:
> -			DRM_ERROR("unknown voltage object table\n");
> -			return false;
> -		}
> -
> -	}
> -	return false;
> -}
> -
> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> -				      u8 voltage_type, u8 voltage_mode,
> -				      struct atom_voltage_table *voltage_table)
> -{
> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> -	u8 frev, crev;
> -	u16 data_offset, size;
> -	int i;
> -	union voltage_object_info *voltage_info;
> -	union voltage_object *voltage_object = NULL;
> -
> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> -				   &frev, &crev, &data_offset)) {
> -		voltage_info = (union voltage_object_info *)
> -			(adev->mode_info.atom_context->bios + data_offset);
> -
> -		switch (frev) {
> -		case 3:
> -			switch (crev) {
> -			case 1:
> -				voltage_object = (union voltage_object *)
> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> -								      voltage_type, voltage_mode);
> -				if (voltage_object) {
> -					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
> -						&voltage_object->v3.asGpioVoltageObj;
> -					VOLTAGE_LUT_ENTRY_V2 *lut;
> -					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
> -						return -EINVAL;
> -					lut = &gpio->asVolGpioLut[0];
> -					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
> -						voltage_table->entries[i].value =
> -							le16_to_cpu(lut->usVoltageValue);
> -						voltage_table->entries[i].smio_low =
> -							le32_to_cpu(lut->ulVoltageId);
> -						lut = (VOLTAGE_LUT_ENTRY_V2 *)
> -							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
> -					}
> -					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
> -					voltage_table->count = gpio->ucGpioEntryNum;
> -					voltage_table->phase_delay = gpio->ucPhaseDelay;
> -					return 0;
> -				}
> -				break;
> -			default:
> -				DRM_ERROR("unknown voltage object table\n");
> -				return -EINVAL;
> -			}
> -			break;
> -		default:
> -			DRM_ERROR("unknown voltage object table\n");
> -			return -EINVAL;
> -		}
> -	}
> -	return -EINVAL;
> -}
> -
> -union vram_info {
> -	struct _ATOM_VRAM_INFO_V3 v1_3;
> -	struct _ATOM_VRAM_INFO_V4 v1_4;
> -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> -};
> -
> -#define MEM_ID_MASK           0xff000000
> -#define MEM_ID_SHIFT          24
> -#define CLOCK_RANGE_MASK      0x00ffffff
> -#define CLOCK_RANGE_SHIFT     0
> -#define LOW_NIBBLE_MASK       0xf
> -#define DATA_EQU_PREV         0
> -#define DATA_FROM_TABLE       4
> -
> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> -				      u8 module_index,
> -				      struct atom_mc_reg_table *reg_table)
> -{
> -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> -	u32 i = 0, j;
> -	u16 data_offset, size;
> -	union vram_info *vram_info;
> -
> -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> -
> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> -				   &frev, &crev, &data_offset)) {
> -		vram_info = (union vram_info *)
> -			(adev->mode_info.atom_context->bios + data_offset);
> -		switch (frev) {
> -		case 1:
> -			DRM_ERROR("old table version %d, %d\n", frev, crev);
> -			return -EINVAL;
> -		case 2:
> -			switch (crev) {
> -			case 1:
> -				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
> -					ATOM_INIT_REG_BLOCK *reg_block =
> -						(ATOM_INIT_REG_BLOCK *)
> -						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> -					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> -						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> -						((u8 *)reg_block + (2 * sizeof(u16)) +
> -						 le16_to_cpu(reg_block->usRegIndexTblSize));
> -					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
> -					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> -							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> -					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
> -						return -EINVAL;
> -					while (i < num_entries) {
> -						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
> -							break;
> -						reg_table->mc_reg_address[i].s1 =
> -							(u16)(le16_to_cpu(format->usRegIndex));
> -						reg_table->mc_reg_address[i].pre_reg_data =
> -							(u8)(format->ucPreRegDataLength);
> -						i++;
> -						format = (ATOM_INIT_REG_INDEX_FORMAT *)
> -							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> -					}
> -					reg_table->last = i;
> -					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> -					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
> -						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> -								>> MEM_ID_SHIFT);
> -						if (module_index == t_mem_id) {
> -							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
> -								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> -								      >> CLOCK_RANGE_SHIFT);
> -							for (i = 0, j = 1; i < reg_table->last; i++) {
> -								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
> -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> -										(u32)le32_to_cpu(*((u32 *)reg_data + j));
> -									j++;
> -								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
> -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> -										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> -								}
> -							}
> -							num_ranges++;
> -						}
> -						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> -							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
> -					}
> -					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
> -						return -EINVAL;
> -					reg_table->num_entries = num_ranges;
> -				} else
> -					return -EINVAL;
> -				break;
> -			default:
> -				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> -				return -EINVAL;
> -			}
> -			break;
> -		default:
> -			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> -			return -EINVAL;
> -		}
> -		return 0;
> -	}
> -	return -EINVAL;
> -}
> -
>   bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
>   {
>   	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> index 27e74b1fc260..cb5649298dcb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> @@ -160,26 +160,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>   				       bool strobe_mode,
>   				       struct atom_clock_dividers *dividers);
>   
> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> -					    u32 clock,
> -					    bool strobe_mode,
> -					    struct atom_mpll_param *mpll_param);
> -
> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> -					     u32 eng_clock, u32 mem_clock);
> -
> -bool
> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> -				u8 voltage_type, u8 voltage_mode);
> -
> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> -				      u8 voltage_type, u8 voltage_mode,
> -				      struct atom_voltage_table *voltage_table);
> -
> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> -				      u8 module_index,
> -				      struct atom_mc_reg_table *reg_table);
> -
>   bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev);
>   
>   void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock);
> @@ -190,21 +170,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
>   bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
>   
>   void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> -			     u16 voltage_id, u16 *voltage);
> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> -						      u16 *voltage,
> -						      u16 leakage_idx);
> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> -					  u16 *vddc, u16 *vddci, u16 *mvdd);
>   int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>   				       u8 clock_type,
>   				       u32 clock,
>   				       bool strobe_mode,
>   				       struct atom_clock_dividers *dividers);
> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> -			      u8 voltage_type,
> -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
>   
>   int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
>   				   uint32_t table,


Whether it is used by the legacy or the new logic, atombios table
parsing/execution should be kept as separate logic. These helpers
shouldn't be moved along with the dpm code.
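
Something like the below is what I have in mind -- just a sketch, and
legacy_dpm_load_mc_reg_table() is a made-up name, not part of this
series. legacy_dpm.c keeps the SI/KV-specific policy, but keeps
consuming the table walkers through the existing amdgpu_atombios.h
interface instead of carrying a private copy:

/*
 * Sketch only: atombios table parsing/execution stays in
 * amdgpu_atombios.c behind its existing interface; legacy_dpm.c
 * (and si_dpm.c/kv_dpm.c) merely consume it. The wrapper name is
 * hypothetical; amdgpu_atombios_init_mc_reg_table() has the
 * signature currently declared in amdgpu_atombios.h.
 */
#include "amdgpu.h"
#include "amdgpu_atombios.h"

static int legacy_dpm_load_mc_reg_table(struct amdgpu_device *adev,
					u8 module_index,
					struct atom_mc_reg_table *reg_table)
{
	/* VRAM_Info table walking kept in amdgpu_atombios.c */
	return amdgpu_atombios_init_mc_reg_table(adev, module_index,
						 reg_table);
}

That way any future atombios table format change stays in one place,
and the legacy dpm split remains purely about which ASICs use the code.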


> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> index 2e295facd086..cdf724dcf832 100644
> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> @@ -404,6 +404,7 @@ struct amd_pm_funcs {
>   	int (*get_dpm_clock_table)(void *handle,
>   				   struct dpm_clocks *clock_table);
>   	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
> +	int (*change_power_state)(void *handle);
>   };
>   
>   struct metrics_table_header {
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index ecaf0081bc31..c6801d10cde6 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -34,113 +34,9 @@
>   
>   #define WIDTH_4K 3840
>   
> -#define amdgpu_dpm_pre_set_power_state(adev) \
> -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_post_set_power_state(adev) \
> -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_display_configuration_changed(adev) \
> -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_print_power_state(adev, ps) \
> -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> -
> -#define amdgpu_dpm_vblank_too_short(adev) \
> -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> -
>   #define amdgpu_dpm_enable_bapm(adev, e) \
>   		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
>   
> -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> -
> -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> -{
> -	const char *s;
> -
> -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> -	default:
> -		s = "none";
> -		break;
> -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> -		s = "battery";
> -		break;
> -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> -		s = "balanced";
> -		break;
> -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> -		s = "performance";
> -		break;
> -	}
> -	printk("\tui class: %s\n", s);
> -	printk("\tinternal class:");
> -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> -	    (class2 == 0))
> -		pr_cont(" none");
> -	else {
> -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> -			pr_cont(" boot");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> -			pr_cont(" thermal");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> -			pr_cont(" limited_pwr");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> -			pr_cont(" rest");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> -			pr_cont(" forced");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> -			pr_cont(" 3d_perf");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> -			pr_cont(" ovrdrv");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> -			pr_cont(" uvd");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> -			pr_cont(" 3d_low");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> -			pr_cont(" acpi");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> -			pr_cont(" uvd_hd2");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> -			pr_cont(" uvd_hd");
> -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> -			pr_cont(" uvd_sd");
> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> -			pr_cont(" limited_pwr2");
> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> -			pr_cont(" ulv");
> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> -			pr_cont(" uvd_mvc");
> -	}
> -	pr_cont("\n");
> -}
> -
> -void amdgpu_dpm_print_cap_info(u32 caps)
> -{
> -	printk("\tcaps:");
> -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> -		pr_cont(" single_disp");
> -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> -		pr_cont(" video");
> -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> -		pr_cont(" no_dc");
> -	pr_cont("\n");
> -}
> -
> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> -				struct amdgpu_ps *rps)
> -{
> -	printk("\tstatus:");
> -	if (rps == adev->pm.dpm.current_ps)
> -		pr_cont(" c");
> -	if (rps == adev->pm.dpm.requested_ps)
> -		pr_cont(" r");
> -	if (rps == adev->pm.dpm.boot_ps)
> -		pr_cont(" b");
> -	pr_cont("\n");
> -}
> -
>   static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
>   {
>   	struct drm_device *ddev = adev_to_drm(adev);
> @@ -161,7 +57,6 @@ static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
>   	}
>   }
>   
> -
>   u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
>   {
>   	struct drm_device *dev = adev_to_drm(adev);
> @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
>   	return vrefresh;
>   }
>   
> -union power_info {
> -	struct _ATOM_POWERPLAY_INFO info;
> -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> -};
> -
> -union fan_info {
> -	struct _ATOM_PPLIB_FANTABLE fan;
> -	struct _ATOM_PPLIB_FANTABLE2 fan2;
> -	struct _ATOM_PPLIB_FANTABLE3 fan3;
> -};
> -
> -static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> -					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> -{
> -	u32 size = atom_table->ucNumEntries *
> -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> -	int i;
> -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> -
> -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> -	if (!amdgpu_table->entries)
> -		return -ENOMEM;
> -
> -	entry = &atom_table->entries[0];
> -	for (i = 0; i < atom_table->ucNumEntries; i++) {
> -		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
> -			(entry->ucClockHigh << 16);
> -		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
> -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
> -			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> -	}
> -	amdgpu_table->count = atom_table->ucNumEntries;
> -
> -	return 0;
> -}
> -
> -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> -{
> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> -	union power_info *power_info;
> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> -	u16 data_offset;
> -	u8 frev, crev;
> -
> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> -				   &frev, &crev, &data_offset))
> -		return -EINVAL;
> -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> -
> -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
> -	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
> -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
> -
> -	return 0;
> -}
> -
> -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> -
> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> -{
> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> -	union power_info *power_info;
> -	union fan_info *fan_info;
> -	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> -	u16 data_offset;
> -	u8 frev, crev;
> -	int ret, i;
> -
> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> -				   &frev, &crev, &data_offset))
> -		return -EINVAL;
> -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> -
> -	/* fan table */
> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> -		if (power_info->pplib3.usFanTableOffset) {
> -			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
> -						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
> -			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> -			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
> -			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
> -			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
> -			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
> -			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
> -			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
> -			if (fan_info->fan.ucFanTableFormat >= 2)
> -				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
> -			else
> -				adev->pm.dpm.fan.t_max = 10900;
> -			adev->pm.dpm.fan.cycle_delay = 100000;
> -			if (fan_info->fan.ucFanTableFormat >= 3) {
> -				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
> -				adev->pm.dpm.fan.default_max_fan_pwm =
> -					le16_to_cpu(fan_info->fan3.usFanPWMMax);
> -				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
> -				adev->pm.dpm.fan.fan_output_sensitivity =
> -					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
> -			}
> -			adev->pm.dpm.fan.ucode_fan_control = true;
> -		}
> -	}
> -
> -	/* clock dependancy tables, shedding tables */
> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> -		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
> -								 dep_table);
> -			if (ret) {
> -				amdgpu_free_extended_power_table(adev);
> -				return ret;
> -			}
> -		}
> -		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> -								 dep_table);
> -			if (ret) {
> -				amdgpu_free_extended_power_table(adev);
> -				return ret;
> -			}
> -		}
> -		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> -								 dep_table);
> -			if (ret) {
> -				amdgpu_free_extended_power_table(adev);
> -				return ret;
> -			}
> -		}
> -		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> -								 dep_table);
> -			if (ret) {
> -				amdgpu_free_extended_power_table(adev);
> -				return ret;
> -			}
> -		}
> -		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> -			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> -				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> -			if (clk_v->ucNumEntries) {
> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> -					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> -					(clk_v->entries[0].ucSclkHigh << 16);
> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> -					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> -					(clk_v->entries[0].ucMclkHigh << 16);
> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> -					le16_to_cpu(clk_v->entries[0].usVddc);
> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> -					le16_to_cpu(clk_v->entries[0].usVddci);
> -			}
> -		}
> -		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> -			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> -				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> -			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> -
> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> -				kcalloc(psl->ucNumEntries,
> -					sizeof(struct amdgpu_phase_shedding_limits_entry),
> -					GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -
> -			entry = &psl->entries[0];
> -			for (i = 0; i < psl->ucNumEntries; i++) {
> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> -					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> -					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> -					le16_to_cpu(entry->usVoltage);
> -				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> -			}
> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> -				psl->ucNumEntries;
> -		}
> -	}
> -
> -	/* cac data */
> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> -		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> -		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> -		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> -		if (adev->pm.dpm.tdp_od_limit)
> -			adev->pm.dpm.power_control = true;
> -		else
> -			adev->pm.dpm.power_control = false;
> -		adev->pm.dpm.tdp_adjustment = 0;
> -		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> -		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> -		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> -		if (power_info->pplib5.usCACLeakageTableOffset) {
> -			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> -				(ATOM_PPLIB_CAC_Leakage_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> -			ATOM_PPLIB_CAC_Leakage_Record *entry;
> -			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> -			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			entry = &cac_table->entries[0];
> -			for (i = 0; i < cac_table->ucNumEntries; i++) {
> -				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> -						le16_to_cpu(entry->usVddc1);
> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> -						le16_to_cpu(entry->usVddc2);
> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> -						le16_to_cpu(entry->usVddc3);
> -				} else {
> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> -						le16_to_cpu(entry->usVddc);
> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> -						le32_to_cpu(entry->ulLeakageValue);
> -				}
> -				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> -			}
> -			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> -		}
> -	}
> -
> -	/* ext tables */
> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> -			(mode_info->atom_context->bios + data_offset +
> -			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> -			ext_hdr->usVCETableOffset) {
> -			VCEClockInfoArray *array = (VCEClockInfoArray *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> -				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> -				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> -			ATOM_PPLIB_VCE_State_Table *states =
> -				(ATOM_PPLIB_VCE_State_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> -				 1 + (array->ucNumEntries * sizeof (VCEClockInfo)) +
> -				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> -			ATOM_PPLIB_VCE_State_Record *state_entry;
> -			VCEClockInfo *vce_clk;
> -			u32 size = limits->numEntries *
> -				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> -				kzalloc(size, GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> -				limits->numEntries;
> -			entry = &limits->entries[0];
> -			state_entry = &states->entries[0];
> -			for (i = 0; i < limits->numEntries; i++) {
> -				vce_clk = (VCEClockInfo *)
> -					((u8 *)&array->entries[0] +
> -					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> -					le16_to_cpu(entry->usVoltage);
> -				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> -			}
> -			adev->pm.dpm.num_of_vce_states =
> -					states->numEntries > AMD_MAX_VCE_LEVELS ?
> -					AMD_MAX_VCE_LEVELS : states->numEntries;
> -			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> -				vce_clk = (VCEClockInfo *)
> -					((u8 *)&array->entries[0] +
> -					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> -				adev->pm.dpm.vce_states[i].evclk =
> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> -				adev->pm.dpm.vce_states[i].ecclk =
> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> -				adev->pm.dpm.vce_states[i].clk_idx =
> -					state_entry->ucClockInfoIndex & 0x3f;
> -				adev->pm.dpm.vce_states[i].pstate =
> -					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> -				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> -					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> -			}
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> -			ext_hdr->usUVDTableOffset) {
> -			UVDClockInfoArray *array = (UVDClockInfoArray *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> -				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> -				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> -			u32 size = limits->numEntries *
> -				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> -				kzalloc(size, GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> -				limits->numEntries;
> -			entry = &limits->entries[0];
> -			for (i = 0; i < limits->numEntries; i++) {
> -				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> -					((u8 *)&array->entries[0] +
> -					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> -					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> -					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> -					le16_to_cpu(entry->usVoltage);
> -				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> -			}
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> -			ext_hdr->usSAMUTableOffset) {
> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> -				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> -			u32 size = limits->numEntries *
> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> -				kzalloc(size, GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> -				limits->numEntries;
> -			entry = &limits->entries[0];
> -			for (i = 0; i < limits->numEntries; i++) {
> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> -					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> -					le16_to_cpu(entry->usVoltage);
> -				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> -			}
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> -		    ext_hdr->usPPMTableOffset) {
> -			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> -			adev->pm.dpm.dyn_state.ppm_table =
> -				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.ppm_table) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
> -			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
> -				le16_to_cpu(ppm->usCpuCoreNumber);
> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
> -				le32_to_cpu(ppm->ulPlatformTDP);
> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
> -				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
> -				le32_to_cpu(ppm->ulPlatformTDC);
> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
> -				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> -			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> -				le32_to_cpu(ppm->ulApuTDP);
> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> -				le32_to_cpu(ppm->ulDGpuTDP);
> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
> -				le32_to_cpu(ppm->ulDGpuUlvPower);
> -			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> -				le32_to_cpu(ppm->ulTjmax);
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> -			ext_hdr->usACPTableOffset) {
> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> -				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> -			u32 size = limits->numEntries *
> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> -				kzalloc(size, GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> -				limits->numEntries;
> -			entry = &limits->entries[0];
> -			for (i = 0; i < limits->numEntries; i++) {
> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> -					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> -					le16_to_cpu(entry->usVoltage);
> -				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> -					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> -			}
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> -			ext_hdr->usPowerTuneTableOffset) {
> -			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> -			ATOM_PowerTune_Table *pt;
> -			adev->pm.dpm.dyn_state.cac_tdp_table =
> -				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
> -			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> -				amdgpu_free_extended_power_table(adev);
> -				return -ENOMEM;
> -			}
> -			if (rev > 0) {
> -				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> -					(mode_info->atom_context->bios + data_offset +
> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
> -					ppt->usMaximumPowerDeliveryLimit;
> -				pt = &ppt->power_tune_table;
> -			} else {
> -				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
> -					(mode_info->atom_context->bios + data_offset +
> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
> -				pt = &ppt->power_tune_table;
> -			}
> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
> -				le16_to_cpu(pt->usConfigurableTDP);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
> -				le16_to_cpu(pt->usBatteryPowerLimit);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
> -				le16_to_cpu(pt->usSmallPowerLimit);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
> -				le16_to_cpu(pt->usLowCACLeakage);
> -			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
> -				le16_to_cpu(pt->usHighCACLeakage);
> -		}
> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> -				ext_hdr->usSclkVddgfxTableOffset) {
> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> -				(mode_info->atom_context->bios + data_offset +
> -				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
> -			ret = amdgpu_parse_clk_voltage_dep_table(
> -					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> -					dep_table);
> -			if (ret) {
> -				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> -				return ret;
> -			}
> -		}
> -	}
> -
> -	return 0;
> -}
> -
> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> -{
> -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> -
> -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> -	kfree(dyn_state->cac_leakage_table.entries);
> -	kfree(dyn_state->phase_shedding_limits_table.entries);
> -	kfree(dyn_state->ppm_table);
> -	kfree(dyn_state->cac_tdp_table);
> -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> -}
> -
> -static const char *pp_lib_thermal_controller_names[] = {
> -	"NONE",
> -	"lm63",
> -	"adm1032",
> -	"adm1030",
> -	"max6649",
> -	"lm64",
> -	"f75375",
> -	"RV6xx",
> -	"RV770",
> -	"adt7473",
> -	"NONE",
> -	"External GPIO",
> -	"Evergreen",
> -	"emc2103",
> -	"Sumo",
> -	"Northern Islands",
> -	"Southern Islands",
> -	"lm96163",
> -	"Sea Islands",
> -	"Kaveri/Kabini",
> -};
> -
> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> -{
> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> -	ATOM_PPLIB_THERMALCONTROLLER *controller;
> -	struct amdgpu_i2c_bus_rec i2c_bus;
> -	u16 data_offset;
> -	u8 frev, crev;
> -
> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> -				   &frev, &crev, &data_offset))
> -		return;
> -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> -		(mode_info->atom_context->bios + data_offset);
> -	controller = &power_table->sThermalController;
> -
> -	/* add the i2c bus for thermal/fan chip */
> -	if (controller->ucType > 0) {
> -		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
> -			adev->pm.no_fan = true;
> -		adev->pm.fan_pulses_per_revolution =
> -			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
> -		if (adev->pm.fan_pulses_per_revolution) {
> -			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> -			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
> -		}
> -		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
> -			DRM_INFO("Internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> -			DRM_INFO("External GPIO thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
> -		} else if (controller->ucType ==
> -			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> -			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> -		} else if (controller->ucType ==
> -			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> -			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> -		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> -			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
> -				 pp_lib_thermal_controller_names[controller->ucType],
> -				 controller->ucI2cAddress >> 1,
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
> -			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
> -			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
> -			if (adev->pm.i2c_bus) {
> -				struct i2c_board_info info = { };
> -				const char *name = pp_lib_thermal_controller_names[controller->ucType];
> -				info.addr = controller->ucI2cAddress >> 1;
> -				strlcpy(info.type, name, sizeof(info.type));
> -				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
> -			}
> -		} else {
> -			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
> -				 controller->ucType,
> -				 controller->ucI2cAddress >> 1,
> -				 (controller->ucFanParameters &
> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> -		}
> -	}
> -}
> -
> -struct amd_vce_state*
> -amdgpu_get_vce_clock_state(void *handle, u32 idx)
> -{
> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -
> -	if (idx < adev->pm.dpm.num_of_vce_states)
> -		return &adev->pm.dpm.vce_states[idx];
> -
> -	return NULL;
> -}
> -
>   int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
>   {
>   	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> @@ -1243,211 +465,6 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
>   	amdgpu_pm_compute_clocks(adev);
>   }
>   
> -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> -						     enum amd_pm_state_type dpm_state)
> -{
> -	int i;
> -	struct amdgpu_ps *ps;
> -	u32 ui_class;
> -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> -		true : false;
> -
> -	/* check if the vblank period is too short to adjust the mclk */
> -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> -		if (amdgpu_dpm_vblank_too_short(adev))
> -			single_display = false;
> -	}
> -
> -	/* certain older asics have a separate 3D performance state,
> -	 * so try that first if the user selected performance
> -	 */
> -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> -	/* balanced states don't exist at the moment */
> -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> -
> -restart_search:
> -	/* Pick the best power state based on current conditions */
> -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> -		ps = &adev->pm.dpm.ps[i];
> -		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> -		switch (dpm_state) {
> -		/* user states */
> -		case POWER_STATE_TYPE_BATTERY:
> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> -					if (single_display)
> -						return ps;
> -				} else
> -					return ps;
> -			}
> -			break;
> -		case POWER_STATE_TYPE_BALANCED:
> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> -					if (single_display)
> -						return ps;
> -				} else
> -					return ps;
> -			}
> -			break;
> -		case POWER_STATE_TYPE_PERFORMANCE:
> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> -					if (single_display)
> -						return ps;
> -				} else
> -					return ps;
> -			}
> -			break;
> -		/* internal states */
> -		case POWER_STATE_TYPE_INTERNAL_UVD:
> -			if (adev->pm.dpm.uvd_ps)
> -				return adev->pm.dpm.uvd_ps;
> -			else
> -				break;
> -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_BOOT:
> -			return adev->pm.dpm.boot_ps;
> -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_ACPI:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_ULV:
> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> -				return ps;
> -			break;
> -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> -				return ps;
> -			break;
> -		default:
> -			break;
> -		}
> -	}
> -	/* use a fallback state if we didn't match */
> -	switch (dpm_state) {
> -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> -		goto restart_search;
> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> -		if (adev->pm.dpm.uvd_ps) {
> -			return adev->pm.dpm.uvd_ps;
> -		} else {
> -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> -			goto restart_search;
> -		}
> -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> -		goto restart_search;
> -	case POWER_STATE_TYPE_INTERNAL_ACPI:
> -		dpm_state = POWER_STATE_TYPE_BATTERY;
> -		goto restart_search;
> -	case POWER_STATE_TYPE_BATTERY:
> -	case POWER_STATE_TYPE_BALANCED:
> -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> -		goto restart_search;
> -	default:
> -		break;
> -	}
> -
> -	return NULL;
> -}
> -
> -static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
> -{
> -	struct amdgpu_ps *ps;
> -	enum amd_pm_state_type dpm_state;
> -	int ret;
> -	bool equal = false;
> -
> -	/* if dpm init failed */
> -	if (!adev->pm.dpm_enabled)
> -		return;
> -
> -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> -		/* add other state override checks here */
> -		if ((!adev->pm.dpm.thermal_active) &&
> -		    (!adev->pm.dpm.uvd_active))
> -			adev->pm.dpm.state = adev->pm.dpm.user_state;
> -	}
> -	dpm_state = adev->pm.dpm.state;
> -
> -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> -	if (ps)
> -		adev->pm.dpm.requested_ps = ps;
> -	else
> -		return;
> -
> -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> -		printk("switching from power state:\n");
> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> -		printk("switching to power state:\n");
> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> -	}
> -
> -	/* update whether vce is active */
> -	ps->vce_active = adev->pm.dpm.vce_active;
> -	if (adev->powerplay.pp_funcs->display_configuration_changed)
> -		amdgpu_dpm_display_configuration_changed(adev);
> -
> -	ret = amdgpu_dpm_pre_set_power_state(adev);
> -	if (ret)
> -		return;
> -
> -	if (adev->powerplay.pp_funcs->check_state_equal) {
> -		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> -			equal = false;
> -	}
> -
> -	if (equal)
> -		return;
> -
> -	if (adev->powerplay.pp_funcs->set_power_state)
> -		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> -
> -	amdgpu_dpm_post_set_power_state(adev);
> -
> -	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> -	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> -
> -	if (adev->powerplay.pp_funcs->force_performance_level) {
> -		if (adev->pm.dpm.thermal_active) {
> -			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> -			/* force low perf level for thermal */
> -			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> -			/* save the user's level */
> -			adev->pm.dpm.forced_level = level;
> -		} else {
> -			/* otherwise, user selected level */
> -			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> -		}
> -	}
> -}
> -
>   void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>   {

Rename to amdgpu_dpm_compute_clocks, to match the amdgpu_dpm_ prefix used 
by the other entry points in this file?

>   	int i = 0;
> @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>   			amdgpu_fence_wait_empty(ring);
>   	}
>   
> -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
> +	if ((adev->family == AMDGPU_FAMILY_SI) ||
> +	     (adev->family == AMDGPU_FAMILY_KV)) {
> +		amdgpu_dpm_get_active_displays(adev);
> +		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);

It would be clearer if the newly added logic in this function were in a 
separate patch; this does more than what the patch subject says.
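
For example, a rough sketch of how the SI/KV path could be split out so 
this patch stays a pure code move (the helper name below is hypothetical, 
not from this series):

	/* Hypothetical helper carrying the behavioural change in a
	 * follow-up patch; the body matches the hunk above. */
	static void amdgpu_dpm_compute_clocks_legacy(struct amdgpu_device *adev)
	{
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}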

> +	} else {
>   		if (!amdgpu_device_has_dc_support(adev)) {
> -			mutex_lock(&adev->pm.mutex);
>   			amdgpu_dpm_get_active_displays(adev);
>   			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
>   			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
> @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>   				adev->powerplay.pp_funcs->display_configuration_change(
>   							adev->powerplay.pp_handle,
>   							&adev->pm.pm_display_cfg);
> -			mutex_unlock(&adev->pm.mutex);
>   		}
>   		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
> -	} else {
> -		mutex_lock(&adev->pm.mutex);
> -		amdgpu_dpm_get_active_displays(adev);
> -		amdgpu_dpm_change_power_state_locked(adev);
> -		mutex_unlock(&adev->pm.mutex);
>   	}
>   }
>   
> @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
>   	}
>   }
>   
> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> -{
> -	int i;
> -
> -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> -		return;
> -
> -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> -		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
> -
> -}
> -
>   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
>   {
>   	int ret = 0;
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> index 01120b302590..295d2902aef7 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> @@ -366,24 +366,10 @@ enum amdgpu_display_gap
>       AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
>   };
>   
> -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> -void amdgpu_dpm_print_cap_info(u32 caps);
> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> -				struct amdgpu_ps *rps);
>   u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
>   int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
>   			   void *data, uint32_t *size);
>   
> -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> -
> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> -
> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> -
> -struct amd_vce_state*
> -amdgpu_get_vce_clock_state(void *handle, u32 idx);
> -
>   int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
>   				      uint32_t block_type, bool gate);
>   
> @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
>   void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
>   void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
>   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>   int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
>   int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
>   int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> index 0fb114adc79f..614d8b6a58ad 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
>   
>   include $(AMD_POWERPLAY)
>   
> -POWER_MGR-y = amd_powerplay.o
> +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
>   
>   POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
>   
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> index 380a5336c74f..90f4c65659e2 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> @@ -36,6 +36,7 @@
>   
>   #include "gca/gfx_7_2_d.h"
>   #include "gca/gfx_7_2_sh_mask.h"
> +#include "legacy_dpm.h"
>   
>   #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
>   #define KV_MINIMUM_ENGINE_CLOCK         800
> @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
>   	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>   	.check_state_equal = kv_check_state_equal,
>   	.read_sensor = &kv_dpm_read_sensor,
> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>   };
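
With this hook, the legacy state change is now reached through the 
pp_funcs table instead of a direct call. Condensed from the 
amdgpu_pm_compute_clocks() hunk earlier in this patch:

	if ((adev->family == AMDGPU_FAMILY_SI) ||
	    (adev->family == AMDGPU_FAMILY_KV)) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}
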
>   
>   static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c

This could get confused with all the APIs that support legacy dpm. This 
file holds only a subset of the APIs supporting legacy dpm, so it needs a 
better name - powerplay_ctrl/powerplay_util?

Thanks,
Lijo

> new file mode 100644
> index 000000000000..9427c1026e1d
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> @@ -0,0 +1,1453 @@
> +/*
> + * Copyright 2021 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include "amdgpu.h"
> +#include "amdgpu_atombios.h"
> +#include "amdgpu_i2c.h"
> +#include "atom.h"
> +#include "amd_pcie.h"
> +#include "legacy_dpm.h"
> +
> +#define amdgpu_dpm_pre_set_power_state(adev) \
> +		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> +
> +#define amdgpu_dpm_post_set_power_state(adev) \
> +		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> +
> +#define amdgpu_dpm_display_configuration_changed(adev) \
> +		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> +
> +#define amdgpu_dpm_print_power_state(adev, ps) \
> +		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> +
> +#define amdgpu_dpm_vblank_too_short(adev) \
> +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> +
> +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> +		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
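
These wrappers dereference the callbacks unconditionally, so the optional 
ones are still guarded at the call site, as in 
amdgpu_dpm_change_power_state_locked(). A minimal usage sketch:

	bool equal = false;

	if (adev->powerplay.pp_funcs->check_state_equal) {
		if (amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps,
						 adev->pm.dpm.requested_ps, &equal))
			equal = false;
	}
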
> +
> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> +					    u32 clock,
> +					    bool strobe_mode,
> +					    struct atom_mpll_param *mpll_param)
> +{
> +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> +	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
> +	u8 frev, crev;
> +
> +	memset(&args, 0, sizeof(args));
> +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> +
> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> +		return -EINVAL;
> +
> +	switch (frev) {
> +	case 2:
> +		switch (crev) {
> +		case 1:
> +			/* SI */
> +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> +			args.ucInputFlag = 0;
> +			if (strobe_mode)
> +				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
> +
> +			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> +
> +			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> +			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
> +			mpll_param->post_div = args.ucPostDiv;
> +			mpll_param->dll_speed = args.ucDllSpeed;
> +			mpll_param->bwcntl = args.ucBWCntl;
> +			mpll_param->vco_mode =
> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
> +			mpll_param->yclk_sel =
> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> +			mpll_param->qdr =
> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> +			mpll_param->half_rate =
> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> +			break;
> +		default:
> +			return -EINVAL;
> +		}
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> +					     u32 eng_clock, u32 mem_clock)
> +{
> +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> +	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
> +	u32 tmp;
> +
> +	memset(&args, 0, sizeof(args));
> +
> +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> +
> +	args.ulTargetEngineClock = cpu_to_le32(tmp);
> +	if (mem_clock)
> +		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
> +
> +	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> +}
> +
> +union firmware_info {
> +	ATOM_FIRMWARE_INFO info;
> +	ATOM_FIRMWARE_INFO_V1_2 info_12;
> +	ATOM_FIRMWARE_INFO_V1_3 info_13;
> +	ATOM_FIRMWARE_INFO_V1_4 info_14;
> +	ATOM_FIRMWARE_INFO_V2_1 info_21;
> +	ATOM_FIRMWARE_INFO_V2_2 info_22;
> +};
> +
> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> +					  u16 *vddc, u16 *vddci, u16 *mvdd)
> +{
> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> +	u8 frev, crev;
> +	u16 data_offset;
> +	union firmware_info *firmware_info;
> +
> +	*vddc = 0;
> +	*vddci = 0;
> +	*mvdd = 0;
> +
> +	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> +				   &frev, &crev, &data_offset)) {
> +		firmware_info =
> +			(union firmware_info *)(mode_info->atom_context->bios +
> +						data_offset);
> +		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
> +		if ((frev == 2) && (crev >= 2)) {
> +			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
> +			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
> +		}
> +	}
> +}
> +
> +union set_voltage {
> +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> +	struct _SET_VOLTAGE_PARAMETERS v1;
> +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> +};
> +
> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> +			     u16 voltage_id, u16 *voltage)
> +{
> +	union set_voltage args;
> +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> +	u8 frev, crev;
> +
> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> +		return -EINVAL;
> +
> +	switch (crev) {
> +	case 1:
> +		return -EINVAL;
> +	case 2:
> +		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
> +		args.v2.ucVoltageMode = 0;
> +		args.v2.usVoltageLevel = 0;
> +
> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> +
> +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> +		break;
> +	case 3:
> +		args.v3.ucVoltageType = voltage_type;
> +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> +
> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> +
> +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> +		break;
> +	default:
> +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> +						      u16 *voltage,
> +						      u16 leakage_idx)
> +{
> +	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> +}
> +
> +union voltage_object_info {
> +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> +};
> +
> +union voltage_object {
> +	struct _ATOM_VOLTAGE_OBJECT v1;
> +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> +};
> +
> +static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
> +									u8 voltage_type, u8 voltage_mode)
> +{
> +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
> +	u8 *start = (u8 *)v3;
> +
> +	while (offset < size) {
> +		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
> +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
> +			return vo;
> +		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
> +	}
> +	return NULL;
> +}
> +
> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> +			      u8 voltage_type,
> +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> +{
> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> +	u8 frev, crev;
> +	u16 data_offset, size;
> +	union voltage_object_info *voltage_info;
> +	union voltage_object *voltage_object = NULL;
> +
> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> +				   &frev, &crev, &data_offset)) {
> +		voltage_info = (union voltage_object_info *)
> +			(adev->mode_info.atom_context->bios + data_offset);
> +
> +		switch (frev) {
> +		case 3:
> +			switch (crev) {
> +			case 1:
> +				voltage_object = (union voltage_object *)
> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> +								      voltage_type,
> +								      VOLTAGE_OBJ_SVID2);
> +				if (voltage_object) {
> +					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
> +					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
> +				} else {
> +					return -EINVAL;
> +				}
> +				break;
> +			default:
> +				DRM_ERROR("unknown voltage object table\n");
> +				return -EINVAL;
> +			}
> +			break;
> +		default:
> +			DRM_ERROR("unknown voltage object table\n");
> +			return -EINVAL;
> +		}
> +
> +	}
> +	return 0;
> +}
> +
> +bool
> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> +				u8 voltage_type, u8 voltage_mode)
> +{
> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> +	u8 frev, crev;
> +	u16 data_offset, size;
> +	union voltage_object_info *voltage_info;
> +
> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> +				   &frev, &crev, &data_offset)) {
> +		voltage_info = (union voltage_object_info *)
> +			(adev->mode_info.atom_context->bios + data_offset);
> +
> +		switch (frev) {
> +		case 3:
> +			switch (crev) {
> +			case 1:
> +				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> +								  voltage_type, voltage_mode))
> +					return true;
> +				break;
> +			default:
> +				DRM_ERROR("unknown voltage object table\n");
> +				return false;
> +			}
> +			break;
> +		default:
> +			DRM_ERROR("unknown voltage object table\n");
> +			return false;
> +		}
> +
> +	}
> +	return false;
> +}
> +
> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> +				      u8 voltage_type, u8 voltage_mode,
> +				      struct atom_voltage_table *voltage_table)
> +{
> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> +	u8 frev, crev;
> +	u16 data_offset, size;
> +	int i;
> +	union voltage_object_info *voltage_info;
> +	union voltage_object *voltage_object = NULL;
> +
> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> +				   &frev, &crev, &data_offset)) {
> +		voltage_info = (union voltage_object_info *)
> +			(adev->mode_info.atom_context->bios + data_offset);
> +
> +		switch (frev) {
> +		case 3:
> +			switch (crev) {
> +			case 1:
> +				voltage_object = (union voltage_object *)
> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> +								      voltage_type, voltage_mode);
> +				if (voltage_object) {
> +					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
> +						&voltage_object->v3.asGpioVoltageObj;
> +					VOLTAGE_LUT_ENTRY_V2 *lut;
> +					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
> +						return -EINVAL;
> +					lut = &gpio->asVolGpioLut[0];
> +					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
> +						voltage_table->entries[i].value =
> +							le16_to_cpu(lut->usVoltageValue);
> +						voltage_table->entries[i].smio_low =
> +							le32_to_cpu(lut->ulVoltageId);
> +						lut = (VOLTAGE_LUT_ENTRY_V2 *)
> +							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
> +					}
> +					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
> +					voltage_table->count = gpio->ucGpioEntryNum;
> +					voltage_table->phase_delay = gpio->ucPhaseDelay;
> +					return 0;
> +				}
> +				break;
> +			default:
> +				DRM_ERROR("unknown voltage object table\n");
> +				return -EINVAL;
> +			}
> +			break;
> +		default:
> +			DRM_ERROR("unknown voltage object table\n");
> +			return -EINVAL;
> +		}
> +	}
> +	return -EINVAL;
> +}
> +
> +union vram_info {
> +	struct _ATOM_VRAM_INFO_V3 v1_3;
> +	struct _ATOM_VRAM_INFO_V4 v1_4;
> +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> +};
> +
> +#define MEM_ID_MASK           0xff000000
> +#define MEM_ID_SHIFT          24
> +#define CLOCK_RANGE_MASK      0x00ffffff
> +#define CLOCK_RANGE_SHIFT     0
> +#define LOW_NIBBLE_MASK       0xf
> +#define DATA_EQU_PREV         0
> +#define DATA_FROM_TABLE       4
> +
> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> +				      u8 module_index,
> +				      struct atom_mc_reg_table *reg_table)
> +{
> +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> +	u32 i = 0, j;
> +	u16 data_offset, size;
> +	union vram_info *vram_info;
> +
> +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> +
> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> +				   &frev, &crev, &data_offset)) {
> +		vram_info = (union vram_info *)
> +			(adev->mode_info.atom_context->bios + data_offset);
> +		switch (frev) {
> +		case 1:
> +			DRM_ERROR("old table version %d, %d\n", frev, crev);
> +			return -EINVAL;
> +		case 2:
> +			switch (crev) {
> +			case 1:
> +				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
> +					ATOM_INIT_REG_BLOCK *reg_block =
> +						(ATOM_INIT_REG_BLOCK *)
> +						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> +					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> +						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> +						((u8 *)reg_block + (2 * sizeof(u16)) +
> +						 le16_to_cpu(reg_block->usRegIndexTblSize));
> +					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
> +					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> +							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> +					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
> +						return -EINVAL;
> +					while (i < num_entries) {
> +						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
> +							break;
> +						reg_table->mc_reg_address[i].s1 =
> +							(u16)(le16_to_cpu(format->usRegIndex));
> +						reg_table->mc_reg_address[i].pre_reg_data =
> +							(u8)(format->ucPreRegDataLength);
> +						i++;
> +						format = (ATOM_INIT_REG_INDEX_FORMAT *)
> +							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> +					}
> +					reg_table->last = i;
> +					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> +					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
> +						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> +								>> MEM_ID_SHIFT);
> +						if (module_index == t_mem_id) {
> +							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
> +								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> +								      >> CLOCK_RANGE_SHIFT);
> +							for (i = 0, j = 1; i < reg_table->last; i++) {
> +								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> +										(u32)le32_to_cpu(*((u32 *)reg_data + j));
> +									j++;
> +								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> +										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> +								}
> +							}
> +							num_ranges++;
> +						}
> +						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> +							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
> +					}
> +					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
> +						return -EINVAL;
> +					reg_table->num_entries = num_ranges;
> +				} else
> +					return -EINVAL;
> +				break;
> +			default:
> +				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> +				return -EINVAL;
> +			}
> +			break;
> +		default:
> +			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> +			return -EINVAL;
> +		}
> +		return 0;
> +	}
> +	return -EINVAL;
> +}
> +
> +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> +{
> +	const char *s;
> +
> +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> +	default:
> +		s = "none";
> +		break;
> +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> +		s = "battery";
> +		break;
> +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> +		s = "balanced";
> +		break;
> +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> +		s = "performance";
> +		break;
> +	}
> +	printk("\tui class: %s\n", s);
> +	printk("\tinternal class:");
> +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> +	    (class2 == 0))
> +		pr_cont(" none");
> +	else {
> +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> +			pr_cont(" boot");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> +			pr_cont(" thermal");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> +			pr_cont(" limited_pwr");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> +			pr_cont(" rest");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> +			pr_cont(" forced");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> +			pr_cont(" 3d_perf");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> +			pr_cont(" ovrdrv");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> +			pr_cont(" uvd");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> +			pr_cont(" 3d_low");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> +			pr_cont(" acpi");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> +			pr_cont(" uvd_hd2");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> +			pr_cont(" uvd_hd");
> +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> +			pr_cont(" uvd_sd");
> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> +			pr_cont(" limited_pwr2");
> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> +			pr_cont(" ulv");
> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> +			pr_cont(" uvd_mvc");
> +	}
> +	pr_cont("\n");
> +}
> +
> +void amdgpu_dpm_print_cap_info(u32 caps)
> +{
> +	printk("\tcaps:");
> +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> +		pr_cont(" single_disp");
> +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> +		pr_cont(" video");
> +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> +		pr_cont(" no_dc");
> +	pr_cont("\n");
> +}
> +
> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> +				struct amdgpu_ps *rps)
> +{
> +	printk("\tstatus:");
> +	if (rps == adev->pm.dpm.current_ps)
> +		pr_cont(" c");
> +	if (rps == adev->pm.dpm.requested_ps)
> +		pr_cont(" r");
> +	if (rps == adev->pm.dpm.boot_ps)
> +		pr_cont(" b");
> +	pr_cont("\n");
> +}
> +
> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> +{
> +	int i;
> +
> +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> +		return;
> +
> +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> +		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
> +
> +}
> +
> +union power_info {
> +	struct _ATOM_POWERPLAY_INFO info;
> +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> +};
> +
> +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> +{
> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> +	union power_info *power_info;
> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> +	u16 data_offset;
> +	u8 frev, crev;
> +
> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> +				   &frev, &crev, &data_offset))
> +		return -EINVAL;
> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> +
> +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
> +	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
> +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
> +
> +	return 0;
> +}
> +
> +union fan_info {
> +	struct _ATOM_PPLIB_FANTABLE fan;
> +	struct _ATOM_PPLIB_FANTABLE2 fan2;
> +	struct _ATOM_PPLIB_FANTABLE3 fan3;
> +};
> +
> +static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> +					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> +{
> +	u32 size = atom_table->ucNumEntries *
> +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> +	int i;
> +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> +
> +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> +	if (!amdgpu_table->entries)
> +		return -ENOMEM;
> +
> +	entry = &atom_table->entries[0];
> +	for (i = 0; i < atom_table->ucNumEntries; i++) {
> +		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
> +			(entry->ucClockHigh << 16);
> +		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
> +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
> +			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> +	}
> +	amdgpu_table->count = atom_table->ucNumEntries;
> +
> +	return 0;
> +}
> +
> +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> +
> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> +{
> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> +	union power_info *power_info;
> +	union fan_info *fan_info;
> +	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> +	u16 data_offset;
> +	u8 frev, crev;
> +	int ret, i;
> +
> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> +				   &frev, &crev, &data_offset))
> +		return -EINVAL;
> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> +
> +	/* fan table */
> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> +		if (power_info->pplib3.usFanTableOffset) {
> +			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
> +						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
> +			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> +			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
> +			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
> +			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
> +			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
> +			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
> +			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
> +			if (fan_info->fan.ucFanTableFormat >= 2)
> +				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
> +			else
> +				adev->pm.dpm.fan.t_max = 10900;
> +			adev->pm.dpm.fan.cycle_delay = 100000;
> +			if (fan_info->fan.ucFanTableFormat >= 3) {
> +				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
> +				adev->pm.dpm.fan.default_max_fan_pwm =
> +					le16_to_cpu(fan_info->fan3.usFanPWMMax);
> +				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
> +				adev->pm.dpm.fan.fan_output_sensitivity =
> +					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
> +			}
> +			adev->pm.dpm.fan.ucode_fan_control = true;
> +		}
> +	}
> +
> +	/* clock dependency tables, shedding tables */
> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> +		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
> +								 dep_table);
> +			if (ret) {
> +				amdgpu_free_extended_power_table(adev);
> +				return ret;
> +			}
> +		}
> +		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> +								 dep_table);
> +			if (ret) {
> +				amdgpu_free_extended_power_table(adev);
> +				return ret;
> +			}
> +		}
> +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> +								 dep_table);
> +			if (ret) {
> +				amdgpu_free_extended_power_table(adev);
> +				return ret;
> +			}
> +		}
> +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> +								 dep_table);
> +			if (ret) {
> +				amdgpu_free_extended_power_table(adev);
> +				return ret;
> +			}
> +		}
> +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> +			if (clk_v->ucNumEntries) {
> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> +					(clk_v->entries[0].ucSclkHigh << 16);
> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> +					(clk_v->entries[0].ucMclkHigh << 16);
> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> +					le16_to_cpu(clk_v->entries[0].usVddc);
> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> +					le16_to_cpu(clk_v->entries[0].usVddci);
> +			}
> +		}
> +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> +
> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> +				kcalloc(psl->ucNumEntries,
> +					sizeof(struct amdgpu_phase_shedding_limits_entry),
> +					GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +
> +			entry = &psl->entries[0];
> +			for (i = 0; i < psl->ucNumEntries; i++) {
> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> +					le16_to_cpu(entry->usVoltage);
> +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> +			}
> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> +				psl->ucNumEntries;
> +		}
> +	}
> +
> +	/* cac data */
> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> +		if (adev->pm.dpm.tdp_od_limit)
> +			adev->pm.dpm.power_control = true;
> +		else
> +			adev->pm.dpm.power_control = false;
> +		adev->pm.dpm.tdp_adjustment = 0;
> +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> +		if (power_info->pplib5.usCACLeakageTableOffset) {
> +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> +				(ATOM_PPLIB_CAC_Leakage_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> +			ATOM_PPLIB_CAC_Leakage_Record *entry;
> +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			entry = &cac_table->entries[0];
> +			for (i = 0; i < cac_table->ucNumEntries; i++) {
> +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> +						le16_to_cpu(entry->usVddc1);
> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> +						le16_to_cpu(entry->usVddc2);
> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> +						le16_to_cpu(entry->usVddc3);
> +				} else {
> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> +						le16_to_cpu(entry->usVddc);
> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> +						le32_to_cpu(entry->ulLeakageValue);
> +				}
> +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> +			}
> +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> +		}
> +	}
> +
> +	/* ext tables */
> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> +			(mode_info->atom_context->bios + data_offset +
> +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> +			ext_hdr->usVCETableOffset) {
> +			VCEClockInfoArray *array = (VCEClockInfoArray *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> +			ATOM_PPLIB_VCE_State_Table *states =
> +				(ATOM_PPLIB_VCE_State_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> +				 1 + (array->ucNumEntries * sizeof (VCEClockInfo)) +
> +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> +			ATOM_PPLIB_VCE_State_Record *state_entry;
> +			VCEClockInfo *vce_clk;
> +			u32 size = limits->numEntries *
> +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> +				kzalloc(size, GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> +				limits->numEntries;
> +			entry = &limits->entries[0];
> +			state_entry = &states->entries[0];
> +			for (i = 0; i < limits->numEntries; i++) {
> +				vce_clk = (VCEClockInfo *)
> +					((u8 *)&array->entries[0] +
> +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> +					le16_to_cpu(entry->usVoltage);
> +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> +			}
> +			adev->pm.dpm.num_of_vce_states =
> +					states->numEntries > AMD_MAX_VCE_LEVELS ?
> +					AMD_MAX_VCE_LEVELS : states->numEntries;
> +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> +				vce_clk = (VCEClockInfo *)
> +					((u8 *)&array->entries[0] +
> +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> +				adev->pm.dpm.vce_states[i].evclk =
> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> +				adev->pm.dpm.vce_states[i].ecclk =
> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> +				adev->pm.dpm.vce_states[i].clk_idx =
> +					state_entry->ucClockInfoIndex & 0x3f;
> +				adev->pm.dpm.vce_states[i].pstate =
> +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> +			}
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> +			ext_hdr->usUVDTableOffset) {
> +			UVDClockInfoArray *array = (UVDClockInfoArray *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> +				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> +			u32 size = limits->numEntries *
> +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> +				kzalloc(size, GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> +				limits->numEntries;
> +			entry = &limits->entries[0];
> +			for (i = 0; i < limits->numEntries; i++) {
> +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> +					((u8 *)&array->entries[0] +
> +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> +					le16_to_cpu(entry->usVoltage);
> +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> +			}
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> +			ext_hdr->usSAMUTableOffset) {
> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> +			u32 size = limits->numEntries *
> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> +				kzalloc(size, GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> +				limits->numEntries;
> +			entry = &limits->entries[0];
> +			for (i = 0; i < limits->numEntries; i++) {
> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> +					le16_to_cpu(entry->usVoltage);
> +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> +			}
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> +		    ext_hdr->usPPMTableOffset) {
> +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> +			adev->pm.dpm.dyn_state.ppm_table =
> +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.ppm_table) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
> +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
> +				le16_to_cpu(ppm->usCpuCoreNumber);
> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
> +				le32_to_cpu(ppm->ulPlatformTDP);
> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
> +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
> +				le32_to_cpu(ppm->ulPlatformTDC);
> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
> +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> +				le32_to_cpu(ppm->ulApuTDP);
> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> +				le32_to_cpu(ppm->ulDGpuTDP);
> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
> +				le32_to_cpu(ppm->ulDGpuUlvPower);
> +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> +				le32_to_cpu(ppm->ulTjmax);
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> +			ext_hdr->usACPTableOffset) {
> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> +			u32 size = limits->numEntries *
> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> +				kzalloc(size, GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> +				limits->numEntries;
> +			entry = &limits->entries[0];
> +			for (i = 0; i < limits->numEntries; i++) {
> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> +					le16_to_cpu(entry->usVoltage);
> +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> +			}
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> +			ext_hdr->usPowerTuneTableOffset) {
> +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> +			ATOM_PowerTune_Table *pt;
> +			adev->pm.dpm.dyn_state.cac_tdp_table =
> +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
> +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> +				amdgpu_free_extended_power_table(adev);
> +				return -ENOMEM;
> +			}
> +			if (rev > 0) {
> +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> +					(mode_info->atom_context->bios + data_offset +
> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
> +					le16_to_cpu(ppt->usMaximumPowerDeliveryLimit);
> +				pt = &ppt->power_tune_table;
> +			} else {
> +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
> +					(mode_info->atom_context->bios + data_offset +
> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
> +				pt = &ppt->power_tune_table;
> +			}
> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
> +				le16_to_cpu(pt->usConfigurableTDP);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
> +				le16_to_cpu(pt->usBatteryPowerLimit);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
> +				le16_to_cpu(pt->usSmallPowerLimit);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
> +				le16_to_cpu(pt->usLowCACLeakage);
> +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
> +				le16_to_cpu(pt->usHighCACLeakage);
> +		}
> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> +				ext_hdr->usSclkVddgfxTableOffset) {
> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> +				(mode_info->atom_context->bios + data_offset +
> +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
> +			ret = amdgpu_parse_clk_voltage_dep_table(
> +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> +					dep_table);
> +			if (ret) {
> +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> +				return ret;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> +{
> +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> +
> +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> +	kfree(dyn_state->cac_leakage_table.entries);
> +	kfree(dyn_state->phase_shedding_limits_table.entries);
> +	kfree(dyn_state->ppm_table);
> +	kfree(dyn_state->cac_tdp_table);
> +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> +}
> +
> +static const char *pp_lib_thermal_controller_names[] = {
> +	"NONE",
> +	"lm63",
> +	"adm1032",
> +	"adm1030",
> +	"max6649",
> +	"lm64",
> +	"f75375",
> +	"RV6xx",
> +	"RV770",
> +	"adt7473",
> +	"NONE",
> +	"External GPIO",
> +	"Evergreen",
> +	"emc2103",
> +	"Sumo",
> +	"Northern Islands",
> +	"Southern Islands",
> +	"lm96163",
> +	"Sea Islands",
> +	"Kaveri/Kabini",
> +};
> +
> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> +{
> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> +	ATOM_PPLIB_THERMALCONTROLLER *controller;
> +	struct amdgpu_i2c_bus_rec i2c_bus;
> +	u16 data_offset;
> +	u8 frev, crev;
> +
> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> +				   &frev, &crev, &data_offset))
> +		return;
> +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> +		(mode_info->atom_context->bios + data_offset);
> +	controller = &power_table->sThermalController;
> +
> +	/* add the i2c bus for thermal/fan chip */
> +	if (controller->ucType > 0) {
> +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
> +			adev->pm.no_fan = true;
> +		adev->pm.fan_pulses_per_revolution =
> +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
> +		if (adev->pm.fan_pulses_per_revolution) {
> +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
> +		}
> +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
> +			DRM_INFO("Internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> +			DRM_INFO("External GPIO thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
> +		} else if (controller->ucType ==
> +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> +		} else if (controller->ucType ==
> +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
> +				 pp_lib_thermal_controller_names[controller->ucType],
> +				 controller->ucI2cAddress >> 1,
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
> +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
> +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
> +			if (adev->pm.i2c_bus) {
> +				struct i2c_board_info info = { };
> +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
> +				info.addr = controller->ucI2cAddress >> 1;
> +				strlcpy(info.type, name, sizeof(info.type));
> +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
> +			}
> +		} else {
> +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
> +				 controller->ucType,
> +				 controller->ucI2cAddress >> 1,
> +				 (controller->ucFanParameters &
> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> +		}
> +	}
> +}
> +
> +struct amd_vce_state *amdgpu_get_vce_clock_state(void *handle, u32 idx)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	if (idx < adev->pm.dpm.num_of_vce_states)
> +		return &adev->pm.dpm.vce_states[idx];
> +
> +	return NULL;
> +}
> +
> +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> +						     enum amd_pm_state_type dpm_state)
> +{
> +	int i;
> +	struct amdgpu_ps *ps;
> +	u32 ui_class;
> +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2);
> +
> +	/* check if the vblank period is too short to adjust the mclk */
> +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> +		if (amdgpu_dpm_vblank_too_short(adev))
> +			single_display = false;
> +	}
> +
> +	/* certain older asics have a separate 3D performance state,
> +	 * so try that first if the user selected performance
> +	 */
> +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> +	/* balanced states don't exist at the moment */
> +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> +
> +restart_search:
> +	/* Pick the best power state based on current conditions */
> +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> +		ps = &adev->pm.dpm.ps[i];
> +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> +		switch (dpm_state) {
> +		/* user states */
> +		case POWER_STATE_TYPE_BATTERY:
> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> +					if (single_display)
> +						return ps;
> +				} else
> +					return ps;
> +			}
> +			break;
> +		case POWER_STATE_TYPE_BALANCED:
> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> +					if (single_display)
> +						return ps;
> +				} else
> +					return ps;
> +			}
> +			break;
> +		case POWER_STATE_TYPE_PERFORMANCE:
> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> +					if (single_display)
> +						return ps;
> +				} else
> +					return ps;
> +			}
> +			break;
> +		/* internal states */
> +		case POWER_STATE_TYPE_INTERNAL_UVD:
> +			if (adev->pm.dpm.uvd_ps)
> +				return adev->pm.dpm.uvd_ps;
> +			else
> +				break;
> +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_BOOT:
> +			return adev->pm.dpm.boot_ps;
> +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_ACPI:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_ULV:
> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> +				return ps;
> +			break;
> +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> +				return ps;
> +			break;
> +		default:
> +			break;
> +		}
> +	}
> +	/* use a fallback state if we didn't match */
> +	switch (dpm_state) {
> +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> +		goto restart_search;
> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> +		if (adev->pm.dpm.uvd_ps) {
> +			return adev->pm.dpm.uvd_ps;
> +		} else {
> +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> +			goto restart_search;
> +		}
> +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> +		goto restart_search;
> +	case POWER_STATE_TYPE_INTERNAL_ACPI:
> +		dpm_state = POWER_STATE_TYPE_BATTERY;
> +		goto restart_search;
> +	case POWER_STATE_TYPE_BATTERY:
> +	case POWER_STATE_TYPE_BALANCED:
> +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> +		goto restart_search;
> +	default:
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +int amdgpu_dpm_change_power_state_locked(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	struct amdgpu_ps *ps;
> +	enum amd_pm_state_type dpm_state;
> +	int ret;
> +	bool equal = false;
> +
> +	/* if dpm init failed */
> +	if (!adev->pm.dpm_enabled)
> +		return 0;
> +
> +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> +		/* add other state override checks here */
> +		if ((!adev->pm.dpm.thermal_active) &&
> +		    (!adev->pm.dpm.uvd_active))
> +			adev->pm.dpm.state = adev->pm.dpm.user_state;
> +	}
> +	dpm_state = adev->pm.dpm.state;
> +
> +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> +	if (ps)
> +		adev->pm.dpm.requested_ps = ps;
> +	else
> +		return -EINVAL;
> +
> +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> +		printk("switching from power state:\n");
> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> +		printk("switching to power state:\n");
> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> +	}
> +
> +	/* update whether vce is active */
> +	ps->vce_active = adev->pm.dpm.vce_active;
> +	if (adev->powerplay.pp_funcs->display_configuration_changed)
> +		amdgpu_dpm_display_configuration_changed(adev);
> +
> +	ret = amdgpu_dpm_pre_set_power_state(adev);
> +	if (ret)
> +		return ret;
> +
> +	if (adev->powerplay.pp_funcs->check_state_equal) {
> +		if (amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> +			equal = false;
> +	}
> +
> +	if (equal)
> +		return 0;
> +
> +	if (adev->powerplay.pp_funcs->set_power_state)
> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> +
> +	amdgpu_dpm_post_set_power_state(adev);
> +
> +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> +
> +	if (adev->powerplay.pp_funcs->force_performance_level) {
> +		if (adev->pm.dpm.thermal_active) {
> +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> +			/* force low perf level for thermal */
> +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> +			/* save the user's level */
> +			adev->pm.dpm.forced_level = level;
> +		} else {
> +			/* otherwise, user selected level */
> +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> +		}
> +	}
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> new file mode 100644
> index 000000000000..4adc765c8824
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> @@ -0,0 +1,70 @@
> +/*
> + * Copyright 2021 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#ifndef __LEGACY_DPM_H__
> +#define __LEGACY_DPM_H__
> +
> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> +					    u32 clock,
> +					    bool strobe_mode,
> +					    struct atom_mpll_param *mpll_param);
> +
> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> +					     u32 eng_clock, u32 mem_clock);
> +
> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> +					  u16 *vddc, u16 *vddci, u16 *mvdd);
> +
> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> +			     u16 voltage_id, u16 *voltage);
> +
> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> +						      u16 *voltage,
> +						      u16 leakage_idx);
> +
> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> +			      u8 voltage_type,
> +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> +
> +bool
> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> +				u8 voltage_type, u8 voltage_mode);
> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> +				      u8 voltage_type, u8 voltage_mode,
> +				      struct atom_voltage_table *voltage_table);
> +
> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> +				      u8 module_index,
> +				      struct atom_mc_reg_table *reg_table);
> +
> +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> +void amdgpu_dpm_print_cap_info(u32 caps);
> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> +				struct amdgpu_ps *rps);
> +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> +struct amd_vce_state *amdgpu_get_vce_clock_state(void *handle, u32 idx);
> +int amdgpu_dpm_change_power_state_locked(void *handle);
> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> +#endif
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> index 4f84d8b893f1..a2881c90d187 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> @@ -37,6 +37,7 @@
>   #include <linux/math64.h>
>   #include <linux/seq_file.h>
>   #include <linux/firmware.h>
> +#include "legacy_dpm.h"
>   
>   #define MC_CG_ARB_FREQ_F0           0x0a
>   #define MC_CG_ARB_FREQ_F1           0x0b
> @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
>   	.check_state_equal = &si_check_state_equal,
>   	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>   	.read_sensor = &si_dpm_read_sensor,
> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>   };
>   
>   static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task()
  2021-11-30  7:42 ` [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task() Evan Quan
@ 2021-11-30 13:48   ` Lazar, Lijo
  2021-12-01  3:50     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 13:48 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> We should avoid having multi-function APIs. It should be up to the caller
> to determine when or whether to call amdgpu_dpm_dispatch_task().
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I78ec4eb8ceb6e526a4734113d213d15a5fbaa8a4
> ---
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 18 ++----------------
>   drivers/gpu/drm/amd/pm/amdgpu_pm.c  | 26 ++++++++++++++++++++++++--
>   2 files changed, 26 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index c6299e406848..8f0ae58f4292 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -558,8 +558,6 @@ void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
>   				enum amd_pm_state_type state)
>   {
>   	adev->pm.dpm.user_state = state;
> -
> -	amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
>   }
>   
>   enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
> @@ -727,13 +725,7 @@ int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
>   	if (!pp_funcs->set_sclk_od)
>   		return -EOPNOTSUPP;
>   
> -	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
> -
> -	amdgpu_dpm_dispatch_task(adev,
> -				 AMD_PP_TASK_READJUST_POWER_STATE,
> -				 NULL);
> -
> -	return 0;
> +	return pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
>   }
>   
>   int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
> @@ -753,13 +745,7 @@ int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
>   	if (!pp_funcs->set_mclk_od)
>   		return -EOPNOTSUPP;
>   
> -	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
> -
> -	amdgpu_dpm_dispatch_task(adev,
> -				 AMD_PP_TASK_READJUST_POWER_STATE,
> -				 NULL);
> -
> -	return 0;
> +	return pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
>   }
>   
>   int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> index fa2f4e11e94e..89e1134d660f 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> @@ -187,6 +187,10 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
>   
>   	amdgpu_dpm_set_power_state(adev, state);
>   
> +	amdgpu_dpm_dispatch_task(adev,
> +				 AMD_PP_TASK_ENABLE_USER_STATE,
> +				 &state);
> +

This is just the opposite of what has been done so far. The idea is to
keep the logic inside the dpm_* calls and not in amdgpu_pm.c; this
patch does the reverse. I guess it can be dropped.
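
For comparison, keeping the readjustment inside the wrapper would look
roughly like the sketch below (a hypothetical rework reusing only the
helpers already visible in this patch, not code from the series):

/*
 * Sketch only: fold the power-state readjustment back into the dpm
 * wrapper so the sysfs handlers in amdgpu_pm.c stay free of power
 * internals. Same pp_funcs/pp_handle plumbing and
 * amdgpu_dpm_dispatch_task() as in the patch above.
 */
int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
{
	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
	int ret;

	if (!pp_funcs->set_sclk_od)
		return -EOPNOTSUPP;

	ret = pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
	if (ret)
		return ret;

	amdgpu_dpm_dispatch_task(adev,
				 AMD_PP_TASK_READJUST_POWER_STATE,
				 NULL);

	return 0;
}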

Thanks,
Lijo

>   	pm_runtime_mark_last_busy(ddev->dev);
>   	pm_runtime_put_autosuspend(ddev->dev);
>   
> @@ -1278,7 +1282,16 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
>   		return ret;
>   	}
>   
> -	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> +	ret = amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> +	if (ret) {
> +		pm_runtime_mark_last_busy(ddev->dev);
> +		pm_runtime_put_autosuspend(ddev->dev);
> +		return ret;
> +	}
> +
> +	amdgpu_dpm_dispatch_task(adev,
> +				 AMD_PP_TASK_READJUST_POWER_STATE,
> +				 NULL);
>   
>   	pm_runtime_mark_last_busy(ddev->dev);
>   	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1340,7 +1353,16 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
>   		return ret;
>   	}
>   
> -	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> +	ret = amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> +	if (ret) {
> +		pm_runtime_mark_last_busy(ddev->dev);
> +		pm_runtime_put_autosuspend(ddev->dev);
> +		return ret;
> +	}
> +
> +	amdgpu_dpm_dispatch_task(adev,
> +				 AMD_PP_TASK_READJUST_POWER_STATE,
> +				 NULL);
>   
>   	pm_runtime_mark_last_busy(ddev->dev);
>   	pm_runtime_put_autosuspend(ddev->dev);
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power
  2021-11-30  7:42 ` [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power Evan Quan
@ 2021-11-30 13:57   ` Lazar, Lijo
  2021-12-01  5:39     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 13:57 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> This hides the power implementation details. And, as was done for the
> powerplay framework, we hook the smu_context to adev->powerplay.pp_handle.
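
For context, the opaque handle this relies on is the one already
exported from kgd_pp_interface.h; its shape is roughly:

/* Consumers outside amd/pm only ever see the bare pointer, so whether
 * pp_handle holds a hwmgr or a smu_context stays a power-internal
 * detail.
 */
struct amd_powerplay {
	void *pp_handle;
	const struct amd_pm_funcs *pp_funcs;
};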
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I3969c9f62a8b63dc6e4321a488d8f15022ffeb3d
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  6 --
>   .../gpu/drm/amd/include/kgd_pp_interface.h    |  9 +++
>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 51 ++++++++++------
>   drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       | 11 +---
>   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 60 +++++++++++++------
>   .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  9 +--
>   .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |  9 +--
>   .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |  9 +--
>   .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  4 +-
>   .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  9 +--
>   .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  8 +--
>   11 files changed, 111 insertions(+), 74 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index c987813a4996..fefabd568483 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -99,7 +99,6 @@
>   #include "amdgpu_gem.h"
>   #include "amdgpu_doorbell.h"
>   #include "amdgpu_amdkfd.h"
> -#include "amdgpu_smu.h"
>   #include "amdgpu_discovery.h"
>   #include "amdgpu_mes.h"
>   #include "amdgpu_umc.h"
> @@ -950,11 +949,6 @@ struct amdgpu_device {
>   
>   	/* powerplay */
>   	struct amd_powerplay		powerplay;
> -
> -	/* smu */
> -	struct smu_context		smu;
> -
> -	/* dpm */
>   	struct amdgpu_pm		pm;
>   	u32				cg_flags;
>   	u32				pg_flags;
> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> index 7919e96e772b..da6a82430048 100644
> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> @@ -25,6 +25,9 @@
>   #define __KGD_PP_INTERFACE_H__
>   
>   extern const struct amdgpu_ip_block_version pp_smu_ip_block;
> +extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> +extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> +extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
>   
>   enum smu_event_type {
>   	SMU_EVENT_RESET_COMPLETE = 0,
> @@ -244,6 +247,12 @@ enum pp_power_type
>   	PP_PWR_TYPE_FAST,
>   };
>   
> +enum smu_ppt_limit_type
> +{
> +	SMU_DEFAULT_PPT_LIMIT = 0,
> +	SMU_FAST_PPT_LIMIT,
> +};
> +

This is a contradiction. If the entry point is dpm, this enum shouldn't
be here; the external interface doesn't need to know about internal
datatypes.
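
One way to keep the enum private (a hypothetical sketch, not code from
this series): let the dpm entry point take the pp_power_type already
exported via kgd_pp_interface.h and translate at the boundary.
smu_set_power_limit_ext() below is a placeholder name for whatever
internal helper accepts the limit type:

int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
			       enum pp_power_type type, uint32_t limit)
{
	struct smu_context *smu = adev->powerplay.pp_handle;
	enum smu_ppt_limit_type ppt_type;

	if (!is_support_sw_smu(adev))
		return -EOPNOTSUPP;

	/* map the exported type onto the internal SMU enum */
	ppt_type = (type == PP_PWR_TYPE_FAST) ? SMU_FAST_PPT_LIMIT :
						SMU_DEFAULT_PPT_LIMIT;

	return smu_set_power_limit_ext(smu, ppt_type, limit);
}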

>   #define PP_GROUP_MASK        0xF0000000
>   #define PP_GROUP_SHIFT       28
>   
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index 8f0ae58f4292..a5cbbf9367fe 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -31,6 +31,7 @@
>   #include "amdgpu_display.h"
>   #include "hwmgr.h"
>   #include <linux/power_supply.h>
> +#include "amdgpu_smu.h"
>   
>   #define amdgpu_dpm_enable_bapm(adev, e) \
>   		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> @@ -213,7 +214,7 @@ int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
>   
>   bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (is_support_sw_smu(adev))
>   		return smu_mode1_reset_is_support(smu);
> @@ -223,7 +224,7 @@ bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
>   
>   int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (is_support_sw_smu(adev))
>   		return smu_mode1_reset(smu);
> @@ -276,7 +277,7 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
>   
>   int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device *adev, bool en)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (is_support_sw_smu(adev))
>   		return smu_allow_xgmi_power_down(smu, en);
> @@ -341,7 +342,7 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
>   		mutex_unlock(&adev->pm.mutex);
>   
>   		if (is_support_sw_smu(adev))
> -			smu_set_ac_dc(&adev->smu);
> +			smu_set_ac_dc(adev->powerplay.pp_handle);
>   	}
>   }
>   
> @@ -423,15 +424,16 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
>   
>   int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
>   {
> -	return smu_set_light_sbr(&adev->smu, enable);
> +	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
>   }
>   
>   int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	int ret = 0;
>   
> -	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> -		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
> +	if (is_support_sw_smu(adev))
> +		ret = smu_send_hbm_bad_pages_num(smu, size);
>   
>   	return ret;
>   }
> @@ -446,7 +448,7 @@ int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
>   
>   	switch (type) {
>   	case PP_SCLK:
> -		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
> +		return smu_get_dpm_freq_range(adev->powerplay.pp_handle, SMU_SCLK, min, max);
>   	default:
>   		return -EINVAL;
>   	}
> @@ -457,12 +459,14 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>   				   uint32_t min,
>   				   uint32_t max)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
>   	if (!is_support_sw_smu(adev))
>   		return -EOPNOTSUPP;
>   
>   	switch (type) {
>   	case PP_SCLK:
> -		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
> +		return smu_set_soft_freq_range(smu, SMU_SCLK, min, max);
>   	default:
>   		return -EINVAL;
>   	}
> @@ -470,33 +474,41 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>   
>   int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
>   	if (!is_support_sw_smu(adev))
>   		return 0;
>   
> -	return smu_write_watermarks_table(&adev->smu);
> +	return smu_write_watermarks_table(smu);
>   }
>   
>   int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
>   			      enum smu_event_type event,
>   			      uint64_t event_arg)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
>   	if (!is_support_sw_smu(adev))
>   		return -EOPNOTSUPP;
>   
> -	return smu_wait_for_event(&adev->smu, event, event_arg);
> +	return smu_wait_for_event(smu, event, event_arg);
>   }
>   
>   int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
>   	if (!is_support_sw_smu(adev))
>   		return -EOPNOTSUPP;
>   
> -	return smu_get_status_gfxoff(&adev->smu, value);
> +	return smu_get_status_gfxoff(smu, value);
>   }
>   
>   uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
>   {
> -	return atomic64_read(&adev->smu.throttle_int_counter);
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
> +	return atomic64_read(&smu->throttle_int_counter);
>   }
>   
>   /* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
> @@ -518,10 +530,12 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>   int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>   			    void *umc_ecc)
>   {
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
>   	if (!is_support_sw_smu(adev))
>   		return -EOPNOTSUPP;
>   
> -	return smu_get_ecc_info(&adev->smu, umc_ecc);
> +	return smu_get_ecc_info(smu, umc_ecc);
>   }
>   
>   struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
> @@ -919,9 +933,10 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
>   int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
>   {
>   	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
> -	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
> -	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
> +	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
> +	    (is_support_sw_smu(adev) && smu->is_apu) ||
>   		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
>   		return true;
>   
> @@ -944,7 +959,9 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
>   
>   int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
>   {
> -	return adev->smu.cpu_core_num;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
> +	return smu->cpu_core_num;
>   }
>   
>   void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> index 29791bb21fba..f44139b415b4 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> @@ -205,12 +205,6 @@ enum smu_power_src_type
>   	SMU_POWER_SOURCE_COUNT,
>   };
>   
> -enum smu_ppt_limit_type
> -{
> -	SMU_DEFAULT_PPT_LIMIT = 0,
> -	SMU_FAST_PPT_LIMIT,
> -};
> -
>   enum smu_ppt_limit_level
>   {
>   	SMU_PPT_LIMIT_MIN = -1,
> @@ -1389,10 +1383,6 @@ int smu_mode1_reset(struct smu_context *smu);
>   
>   extern const struct amd_ip_funcs smu_ip_funcs;
>   
> -extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> -extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> -extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
> -
>   bool is_support_sw_smu(struct amdgpu_device *adev);
>   bool is_support_cclk_dpm(struct amdgpu_device *adev);
>   int smu_write_watermarks_table(struct smu_context *smu);
> @@ -1416,6 +1406,7 @@ int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
>   int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
>   int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
>   void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev);
> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size);
>   
>   #endif
>   #endif
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> index eaed5aba7547..2c3fd3cfef05 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> @@ -468,7 +468,7 @@ bool is_support_sw_smu(struct amdgpu_device *adev)
>   
>   bool is_support_cclk_dpm(struct amdgpu_device *adev)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
>   		return false;
> @@ -572,7 +572,7 @@ static int smu_get_driver_allowed_feature_mask(struct smu_context *smu)
>   
>   static int smu_set_funcs(struct amdgpu_device *adev)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
>   		smu->od_enabled = true;
> @@ -624,7 +624,11 @@ static int smu_set_funcs(struct amdgpu_device *adev)
>   static int smu_early_init(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu;
> +
> +	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
> +	if (!smu)
> +		return -ENOMEM;
>   
>   	smu->adev = adev;
>   	smu->pm_enabled = !!amdgpu_dpm;
> @@ -684,7 +688,7 @@ static int smu_set_default_dpm_table(struct smu_context *smu)
>   static int smu_late_init(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	int ret = 0;
>   
>   	smu_set_fine_grain_gfx_freq_parameters(smu);
> @@ -730,7 +734,7 @@ static int smu_late_init(void *handle)
>   
>   	smu_get_fan_parameters(smu);
>   
> -	smu_handle_task(&adev->smu,
> +	smu_handle_task(smu,
>   			smu->smu_dpm.dpm_level,
>   			AMD_PP_TASK_COMPLETE_INIT,
>   			false);
> @@ -1020,7 +1024,7 @@ static void smu_interrupt_work_fn(struct work_struct *work)
>   static int smu_sw_init(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	int ret;
>   
>   	smu->pool_size = adev->pm.smu_prv_buffer_size;
> @@ -1095,7 +1099,7 @@ static int smu_sw_init(void *handle)
>   static int smu_sw_fini(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	int ret;
>   
>   	ret = smu_smc_table_sw_fini(smu);
> @@ -1330,7 +1334,7 @@ static int smu_hw_init(void *handle)
>   {
>   	int ret;
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
>   		smu->pm_enabled = false;
> @@ -1344,10 +1348,10 @@ static int smu_hw_init(void *handle)
>   	}
>   
>   	if (smu->is_apu) {
> -		smu_powergate_sdma(&adev->smu, false);
> +		smu_powergate_sdma(smu, false);
>   		smu_dpm_set_vcn_enable(smu, true);
>   		smu_dpm_set_jpeg_enable(smu, true);
> -		smu_set_gfx_cgpg(&adev->smu, true);
> +		smu_set_gfx_cgpg(smu, true);
>   	}
>   
>   	if (!smu->pm_enabled)
> @@ -1501,13 +1505,13 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
>   static int smu_hw_fini(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
>   		return 0;
>   
>   	if (smu->is_apu) {
> -		smu_powergate_sdma(&adev->smu, true);
> +		smu_powergate_sdma(smu, true);
>   	}
>   
>   	smu_dpm_set_vcn_enable(smu, false);
> @@ -1524,6 +1528,14 @@ static int smu_hw_fini(void *handle)
>   	return smu_smc_hw_cleanup(smu);
>   }
>   
> +static void smu_late_fini(void *handle)
> +{
> +	struct amdgpu_device *adev = handle;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +
> +	kfree(smu);
> +}
> +

This doesn't look related to this change.

>   static int smu_reset(struct smu_context *smu)
>   {
>   	struct amdgpu_device *adev = smu->adev;
> @@ -1551,7 +1563,7 @@ static int smu_reset(struct smu_context *smu)
>   static int smu_suspend(void *handle)
>   {
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	int ret;
>   
>   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> @@ -1570,7 +1582,7 @@ static int smu_suspend(void *handle)
>   
>   	/* skip CGPG when in S0ix */
>   	if (smu->is_apu && !adev->in_s0ix)
> -		smu_set_gfx_cgpg(&adev->smu, false);
> +		smu_set_gfx_cgpg(smu, false);
>   
>   	return 0;
>   }
> @@ -1579,7 +1591,7 @@ static int smu_resume(void *handle)
>   {
>   	int ret;
>   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
>   		return 0;
> @@ -1602,7 +1614,7 @@ static int smu_resume(void *handle)
>   	}
>   
>   	if (smu->is_apu)
> -		smu_set_gfx_cgpg(&adev->smu, true);
> +		smu_set_gfx_cgpg(smu, true);
>   
>   	smu->disable_uclk_switch = 0;
>   
> @@ -2134,6 +2146,7 @@ const struct amd_ip_funcs smu_ip_funcs = {
>   	.sw_fini = smu_sw_fini,
>   	.hw_init = smu_hw_init,
>   	.hw_fini = smu_hw_fini,
> +	.late_fini = smu_late_fini,
>   	.suspend = smu_suspend,
>   	.resume = smu_resume,
>   	.is_idle = NULL,
> @@ -3198,7 +3211,7 @@ int smu_stb_collect_info(struct smu_context *smu, void *buf, uint32_t size)
>   static int smu_stb_debugfs_open(struct inode *inode, struct file *filp)
>   {
>   	struct amdgpu_device *adev = filp->f_inode->i_private;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	unsigned char *buf;
>   	int r;
>   
> @@ -3223,7 +3236,7 @@ static ssize_t smu_stb_debugfs_read(struct file *filp, char __user *buf, size_t
>   				loff_t *pos)
>   {
>   	struct amdgpu_device *adev = filp->f_inode->i_private;
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   
>   	if (!filp->private_data)
> @@ -3264,7 +3277,7 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
>   {
>   #if defined(CONFIG_DEBUG_FS)
>   
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   
>   	if (!smu->stb_context.stb_buf_size)
>   		return;
> @@ -3276,5 +3289,14 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
>   			    &smu_stb_debugfs_fops,
>   			    smu->stb_context.stb_buf_size);
>   #endif
> +}
> +
> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
> +{
> +	int ret = 0;
> +
> +	if (smu->ppt_funcs->send_hbm_bad_pages_num)
> +		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);
>   
> +	return ret;

This also looks unrelated.

Thanks,
Lijo

>   }
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> index 05defeee0c87..a03bbd2a7aa0 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> @@ -2082,7 +2082,8 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			     struct i2c_msg *msg, int num_msgs)
>   {
>   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +	struct smu_table_context *smu_table = &smu->smu_table;
>   	struct smu_table *table = &smu_table->driver_table;
>   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>   	int i, j, r, c;
> @@ -2128,9 +2129,9 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			}
>   		}
>   	}
> -	mutex_lock(&adev->smu.mutex);
> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> -	mutex_unlock(&adev->smu.mutex);
> +	mutex_lock(&smu->mutex);
> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> +	mutex_unlock(&smu->mutex);
>   	if (r)
>   		goto fail;
>   
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> index 2bb7816b245a..37e11716e919 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> @@ -2779,7 +2779,8 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			   struct i2c_msg *msg, int num_msgs)
>   {
>   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +	struct smu_table_context *smu_table = &smu->smu_table;
>   	struct smu_table *table = &smu_table->driver_table;
>   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>   	int i, j, r, c;
> @@ -2825,9 +2826,9 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			}
>   		}
>   	}
> -	mutex_lock(&adev->smu.mutex);
> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> -	mutex_unlock(&adev->smu.mutex);
> +	mutex_lock(&smu->mutex);
> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> +	mutex_unlock(&smu->mutex);
>   	if (r)
>   		goto fail;
>   
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> index 777f717c37ae..6a5064f4ea86 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> @@ -3459,7 +3459,8 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
>   				   struct i2c_msg *msg, int num_msgs)
>   {
>   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +	struct smu_table_context *smu_table = &smu->smu_table;
>   	struct smu_table *table = &smu_table->driver_table;
>   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>   	int i, j, r, c;
> @@ -3505,9 +3506,9 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			}
>   		}
>   	}
> -	mutex_lock(&adev->smu.mutex);
> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> -	mutex_unlock(&adev->smu.mutex);
> +	mutex_lock(&smu->mutex);
> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> +	mutex_unlock(&smu->mutex);
>   	if (r)
>   		goto fail;
>   
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> index 28b7c0562b99..2a53b5b1d261 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> @@ -1372,7 +1372,7 @@ static int smu_v11_0_set_irq_state(struct amdgpu_device *adev,
>   				   unsigned tyep,
>   				   enum amdgpu_interrupt_state state)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	uint32_t low, high;
>   	uint32_t val = 0;
>   
> @@ -1441,7 +1441,7 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
>   				 struct amdgpu_irq_src *source,
>   				 struct amdgpu_iv_entry *entry)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	uint32_t client_id = entry->client_id;
>   	uint32_t src_id = entry->src_id;
>   	/*
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> index 6e781cee8bb6..3c82f5455f88 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> @@ -1484,7 +1484,8 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			      struct i2c_msg *msg, int num_msgs)
>   {
>   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
> +	struct smu_table_context *smu_table = &smu->smu_table;
>   	struct smu_table *table = &smu_table->driver_table;
>   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>   	int i, j, r, c;
> @@ -1530,9 +1531,9 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
>   			}
>   		}
>   	}
> -	mutex_lock(&adev->smu.mutex);
> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> -	mutex_unlock(&adev->smu.mutex);
> +	mutex_lock(&smu->mutex);
> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> +	mutex_unlock(&smu->mutex);
>   	if (r)
>   		goto fail;
>   
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> index 55421ea622fb..4ed01e9d88fb 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> @@ -1195,7 +1195,7 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
>   				   unsigned tyep,
>   				   enum amdgpu_interrupt_state state)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	uint32_t low, high;
>   	uint32_t val = 0;
>   
> @@ -1270,7 +1270,7 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
>   				 struct amdgpu_irq_src *source,
>   				 struct amdgpu_iv_entry *entry)
>   {
> -	struct smu_context *smu = &adev->smu;
> +	struct smu_context *smu = adev->powerplay.pp_handle;
>   	uint32_t client_id = entry->client_id;
>   	uint32_t src_id = entry->src_id;
>   	/*
> @@ -1316,11 +1316,11 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
>   			switch (ctxid) {
>   			case 0x3:
>   				dev_dbg(adev->dev, "Switched to AC mode!\n");
> -				smu_v13_0_ack_ac_dc_interrupt(&adev->smu);
> +				smu_v13_0_ack_ac_dc_interrupt(smu);
>   				break;
>   			case 0x4:
>   				dev_dbg(adev->dev, "Switched to DC mode!\n");
> -				smu_v13_0_ack_ac_dc_interrupt(&adev->smu);
> +				smu_v13_0_ack_ac_dc_interrupt(smu);
>   				break;
>   			case 0x7:
>   				/*
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 14/17] drm/amd/pm: relocate the power related headers
  2021-11-30  7:42 ` [PATCH V2 14/17] drm/amd/pm: relocate the power related headers Evan Quan
@ 2021-11-30 14:07   ` Lazar, Lijo
  2021-12-01  6:22     ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-11-30 14:07 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:12 PM, Evan Quan wrote:
> Instead of centralizing all headers in the same folder, separate them
> into different folders and place them alongside the source files that
> really need them.
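
The split also implies each framework's Makefile carries its own
include path; roughly along these lines (a sketch with assumed
directory names taken from the rename list below; FULL_AMD_PATH is the
variable the amd Makefiles already use):

# Sketch only: per-framework include paths after the relocation.
subdir-ccflags-y += \
	-I$(FULL_AMD_PATH)/pm/legacy-dpm \
	-I$(FULL_AMD_PATH)/pm/powerplay/inc \
	-I$(FULL_AMD_PATH)/pm/swsmu/inc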
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: Id74cb4c7006327ca7ecd22daf17321e417c4aa71
> ---
>   drivers/gpu/drm/amd/pm/Makefile               | 10 +++---
>   drivers/gpu/drm/amd/pm/legacy-dpm/Makefile    | 32 +++++++++++++++++++
>   .../pm/{powerplay => legacy-dpm}/cik_dpm.h    |  0
>   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.c |  0
>   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.h |  0
>   .../amd/pm/{powerplay => legacy-dpm}/kv_smc.c |  0
>   .../pm/{powerplay => legacy-dpm}/legacy_dpm.c |  0
>   .../pm/{powerplay => legacy-dpm}/legacy_dpm.h |  0
>   .../amd/pm/{powerplay => legacy-dpm}/ppsmc.h  |  0
>   .../pm/{powerplay => legacy-dpm}/r600_dpm.h   |  0
>   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.c |  0
>   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.h |  0
>   .../amd/pm/{powerplay => legacy-dpm}/si_smc.c |  0
>   .../{powerplay => legacy-dpm}/sislands_smc.h  |  0
>   drivers/gpu/drm/amd/pm/powerplay/Makefile     |  6 +---
>   .../pm/{ => powerplay}/inc/amd_powerplay.h    |  0
>   .../drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h |  0
>   .../amd/pm/{ => powerplay}/inc/fiji_ppsmc.h   |  0
>   .../pm/{ => powerplay}/inc/hardwaremanager.h  |  0
>   .../drm/amd/pm/{ => powerplay}/inc/hwmgr.h    |  0
>   .../{ => powerplay}/inc/polaris10_pwrvirus.h  |  0
>   .../amd/pm/{ => powerplay}/inc/power_state.h  |  0
>   .../drm/amd/pm/{ => powerplay}/inc/pp_debug.h |  0
>   .../amd/pm/{ => powerplay}/inc/pp_endian.h    |  0
>   .../amd/pm/{ => powerplay}/inc/pp_thermal.h   |  0
>   .../amd/pm/{ => powerplay}/inc/ppinterrupt.h  |  0
>   .../drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu10.h    |  0
>   .../pm/{ => powerplay}/inc/smu10_driver_if.h  |  0
>   .../pm/{ => powerplay}/inc/smu11_driver_if.h  |  0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu71.h    |  0
>   .../pm/{ => powerplay}/inc/smu71_discrete.h   |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu72.h    |  0
>   .../pm/{ => powerplay}/inc/smu72_discrete.h   |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu73.h    |  0
>   .../pm/{ => powerplay}/inc/smu73_discrete.h   |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu74.h    |  0
>   .../pm/{ => powerplay}/inc/smu74_discrete.h   |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smu75.h    |  0
>   .../pm/{ => powerplay}/inc/smu75_discrete.h   |  0
>   .../amd/pm/{ => powerplay}/inc/smu7_common.h  |  0
>   .../pm/{ => powerplay}/inc/smu7_discrete.h    |  0
>   .../amd/pm/{ => powerplay}/inc/smu7_fusion.h  |  0
>   .../amd/pm/{ => powerplay}/inc/smu7_ppsmc.h   |  0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h |  0
>   .../amd/pm/{ => powerplay}/inc/smu8_fusion.h  |  0
>   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h |  0
>   .../pm/{ => powerplay}/inc/smu9_driver_if.h   |  0
>   .../{ => powerplay}/inc/smu_ucode_xfer_cz.h   |  0
>   .../{ => powerplay}/inc/smu_ucode_xfer_vi.h   |  0
>   .../drm/amd/pm/{ => powerplay}/inc/smumgr.h   |  0
>   .../amd/pm/{ => powerplay}/inc/tonga_ppsmc.h  |  0
>   .../amd/pm/{ => powerplay}/inc/vega10_ppsmc.h |  0
>   .../inc/vega12/smu9_driver_if.h               |  0
>   .../amd/pm/{ => powerplay}/inc/vega12_ppsmc.h |  0
>   .../amd/pm/{ => powerplay}/inc/vega20_ppsmc.h |  0
>   .../amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h  |  0
>   .../drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h   |  0
>   .../amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h   |  0
>   .../inc/smu11_driver_if_arcturus.h            |  0
>   .../inc/smu11_driver_if_cyan_skillfish.h      |  0
>   .../{ => swsmu}/inc/smu11_driver_if_navi10.h  |  0
>   .../inc/smu11_driver_if_sienna_cichlid.h      |  0
>   .../{ => swsmu}/inc/smu11_driver_if_vangogh.h |  0
>   .../amd/pm/{ => swsmu}/inc/smu12_driver_if.h  |  0
>   .../inc/smu13_driver_if_aldebaran.h           |  0
>   .../inc/smu13_driver_if_yellow_carp.h         |  0
>   .../pm/{ => swsmu}/inc/smu_11_0_cdr_table.h   |  0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_types.h    |  0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h    |  0
>   .../pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h    |  0
>   .../pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h  |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h  |  0
>   .../pm/{ => swsmu}/inc/smu_v11_0_pptable.h    |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h   |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h  |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h   |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h  |  0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h    |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h  |  0
>   .../drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h    |  0
>   .../amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h |  0
>   .../pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h    |  0
>   .../pm/{ => swsmu}/inc/smu_v13_0_pptable.h    |  0
>   .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  1 -
>   .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  1 -
>   87 files changed, 39 insertions(+), 11 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/cik_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_smc.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/legacy_dpm.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/legacy_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/r600_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.h (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_smc.c (100%)
>   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/sislands_smc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/amd_powerplay.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/fiji_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hardwaremanager.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hwmgr.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/polaris10_pwrvirus.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/power_state.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_debug.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_endian.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_thermal.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/ppinterrupt.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu11_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_common.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_discrete.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_fusion.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8_fusion.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_cz.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu_ucode_xfer_vi.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smumgr.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/tonga_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega10_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12/smu9_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega20_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_arcturus.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_cyan_skillfish.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_navi10.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_sienna_cichlid.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu11_driver_if_vangogh.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu12_driver_if.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_aldebaran.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu13_driver_if_yellow_carp.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_11_0_cdr_table.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_types.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_pptable.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h (100%)
>   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0_pptable.h (100%)
> 
> diff --git a/drivers/gpu/drm/amd/pm/Makefile b/drivers/gpu/drm/amd/pm/Makefile
> index d35ffde387f1..84c7203b5e46 100644
> --- a/drivers/gpu/drm/amd/pm/Makefile
> +++ b/drivers/gpu/drm/amd/pm/Makefile
> @@ -21,20 +21,22 @@
>   #
>   
>   subdir-ccflags-y += \
> -		-I$(FULL_AMD_PATH)/pm/inc/  \
>   		-I$(FULL_AMD_PATH)/include/asic_reg  \
>   		-I$(FULL_AMD_PATH)/include  \
> +		-I$(FULL_AMD_PATH)/pm/inc/  \
>   		-I$(FULL_AMD_PATH)/pm/swsmu \
> +		-I$(FULL_AMD_PATH)/pm/swsmu/inc \
>   		-I$(FULL_AMD_PATH)/pm/swsmu/smu11 \
>   		-I$(FULL_AMD_PATH)/pm/swsmu/smu12 \
>   		-I$(FULL_AMD_PATH)/pm/swsmu/smu13 \
> -		-I$(FULL_AMD_PATH)/pm/powerplay \
> +		-I$(FULL_AMD_PATH)/pm/powerplay/inc \
>   		-I$(FULL_AMD_PATH)/pm/powerplay/smumgr\
> -		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr
> +		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr \
> +		-I$(FULL_AMD_PATH)/pm/legacy-dpm
>   
>   AMD_PM_PATH = ../pm
>   
> -PM_LIBS = swsmu powerplay
> +PM_LIBS = swsmu powerplay legacy-dpm
>   
>   AMD_PM = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/,$(PM_LIBS)))
>   
> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> new file mode 100644
> index 000000000000..baa4265d1daa
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> @@ -0,0 +1,32 @@
> +#
> +# Copyright 2021 Advanced Micro Devices, Inc.
> +#
> +# Permission is hereby granted, free of charge, to any person obtaining a
> +# copy of this software and associated documentation files (the "Software"),
> +# to deal in the Software without restriction, including without limitation
> +# the rights to use, copy, modify, merge, publish, distribute, sublicense,
> +# and/or sell copies of the Software, and to permit persons to whom the
> +# Software is furnished to do so, subject to the following conditions:
> +#
> +# The above copyright notice and this permission notice shall be included in
> +# all copies or substantial portions of the Software.
> +#
> +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> +# THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> +# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> +# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> +# OTHER DEALINGS IN THE SOFTWARE.
> +#
> +
> +AMD_LEGACYDPM_PATH = ../pm/legacy-dpm
> +
> +LEGACYDPM_MGR-y = legacy_dpm.o
> +
> +LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> +LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
> +
> +AMD_LEGACYDPM_POWER = $(addprefix $(AMD_LEGACYDPM_PATH)/,$(LEGACYDPM_MGR-y))
> +
> +AMD_POWERPLAY_FILES += $(AMD_LEGACYDPM_POWER)
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_smc.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/kv_smc.c
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/ppsmc.h b/drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/ppsmc.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_smc.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/si_smc.c
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h b/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h
> rename to drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> index 614d8b6a58ad..795a3624cbbf 100644
> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> @@ -28,11 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
>   
>   include $(AMD_POWERPLAY)
>   
> -POWER_MGR-y = amd_powerplay.o legacy_dpm.o
> -
> -POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> -
> -POWER_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
> +POWER_MGR-y = amd_powerplay.o
>   
>   AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR-y))
>   
> diff --git a/drivers/gpu/drm/amd/pm/inc/amd_powerplay.h b/drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/amd_powerplay.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/hardwaremanager.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/hardwaremanager.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/hwmgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/hwmgr.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h b/drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/power_state.h b/drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/power_state.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/pp_debug.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/pp_debug.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/pp_endian.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/pp_endian.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/pp_thermal.h b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/pp_thermal.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/ppinterrupt.h b/drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/ppinterrupt.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu10.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu10.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu7.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu7.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu71.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu71.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu71_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu71_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu72.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu72.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu72_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu72_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu73.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu73.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu73_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu73_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu74.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu74.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu74_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu74_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu75.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu75.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu75_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu75_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_common.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu7_common.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_discrete.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu7_discrete.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_fusion.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu7_fusion.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu8.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu8.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu8_fusion.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu8_fusion.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu9.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu9.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smumgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smumgr.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h b/drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h


Generic comment -
	swsmu/inc => Only common headers
	smuXY/ => All specific headers

Ex: smu11/smu11_driver_if_arcturus.h
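
As a sketch of what that split would mean for includes (the directory
layout here is only illustrative, not something this patch creates):

	/* arcturus_ppt.c, hypothetical layout per the suggestion above */
	#include "amdgpu_smu.h"                 /* common: swsmu/inc/ */
	#include "smu_v11_0.h"                  /* common: swsmu/inc/ */
	#include "smu11_driver_if_arcturus.h"   /* specific: swsmu/smu11/ */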

Thanks,
Lijo

> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_types.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_types.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h

> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
> diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
> similarity index 100%
> rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h
> rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> index a03bbd2a7aa0..1e6d76657bbb 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> @@ -33,7 +33,6 @@
>   #include "smu11_driver_if_arcturus.h"
>   #include "soc15_common.h"
>   #include "atom.h"
> -#include "power_state.h"
>   #include "arcturus_ppt.h"
>   #include "smu_v11_0_pptable.h"
>   #include "arcturus_ppsmc.h"
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> index 3c82f5455f88..cc502a35f9ef 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> @@ -33,7 +33,6 @@
>   #include "smu13_driver_if_aldebaran.h"
>   #include "soc15_common.h"
>   #include "atom.h"
> -#include "power_state.h"
>   #include "aldebaran_ppt.h"
>   #include "smu_v13_0_pptable.h"
>   #include "aldebaran_ppsmc.h"
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-11-30  8:09   ` Lazar, Lijo
@ 2021-12-01  1:59     ` Quan, Evan
  2021-12-01  3:33       ` Lazar, Lijo
  2021-12-01  3:37     ` Lazar, Lijo
  1 sibling, 1 reply; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  1:59 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of
> Lazar, Lijo
> Sent: Tuesday, November 30, 2021 4:10 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Feng, Kenneth
> <Kenneth.Feng@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>
> Subject: Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation
> details to other blocks out of power
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > Those implementation details (whether swsmu is supported, whether some
> > ppt_funcs are implemented, access to internal statistics ...) should be
> > kept internal. It's not good practice and even error-prone to expose
> > implementation details.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
> > ---
> >   drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
> >   drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
> >   .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95
> +++++++++++++++++++
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
> >   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
> >   13 files changed, 155 insertions(+), 64 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> > b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> > index bcfdb63b1d42..a545df4efce1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> > @@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct
> amdgpu_device *adev)
> >   	adev->gfx.rlc.funcs->resume(adev);
> >
> >   	/* Wait for FW reset event complete */
> > -	r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
> > +	r = amdgpu_dpm_wait_for_event(adev,
> SMU_EVENT_RESET_COMPLETE, 0);
> >   	if (r) {
> >   		dev_err(adev->dev,
> >   			"Failed to get response from firmware after reset\n");
> diff --git
> > a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > index 164d6a9e9fbb..0d1f00b24aae 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > @@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void *data,
> u64 val)
> >   		return ret;
> >   	}
> >
> > -	if (is_support_sw_smu(adev)) {
> > -		ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK,
> &min_freq, &max_freq);
> > -		if (ret || val > max_freq || val < min_freq)
> > -			return -EINVAL;
> > -		ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK,
> (uint32_t)val, (uint32_t)val);
> > -	} else {
> > -		return 0;
> > +	ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK,
> &min_freq, &max_freq);
> > +	if (ret == -EOPNOTSUPP) {
> > +		ret = 0;
> > +		goto out;
> >   	}
> > +	if (ret || val > max_freq || val < min_freq) {
> > +		ret = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK,
> (uint32_t)val, (uint32_t)val);
> > +	if (ret)
> > +		ret = -EINVAL;
> >
> > +out:
> >   	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> >   	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> >
> > -	if (ret)
> > -		return -EINVAL;
> > -
> > -	return 0;
> > +	return ret;
> >   }
> >
> >   DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL, diff --git
> > a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index 1989f9e9379e..41cc1ffb5809 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct
> amdgpu_device *adev)
> >   	if (adev->asic_type == CHIP_ARCTURUS &&
> >   	    amdgpu_passthrough(adev) &&
> >   	    adev->gmc.xgmi.num_physical_nodes > 1)
> > -		smu_set_light_sbr(&adev->smu, true);
> > +		amdgpu_dpm_set_light_sbr(adev, true);
> >
> >   	if (adev->gmc.xgmi.num_physical_nodes > 1) {
> >   		mutex_lock(&mgpu_info.mutex);
> > @@ -2857,7 +2857,7 @@ static int
> amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
> >   	int i, r;
> >
> >   	if (adev->in_s0ix)
> > -		amdgpu_gfx_state_change_set(adev,
> sGpuChangeState_D3Entry);
> > +		amdgpu_dpm_gfx_state_change(adev,
> sGpuChangeState_D3Entry);
> >
> >   	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
> >   		if (!adev->ip_blocks[i].status.valid)
> > @@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device
> *dev, bool fbcon)
> >   		return 0;
> >
> >   	if (adev->in_s0ix)
> > -		amdgpu_gfx_state_change_set(adev,
> sGpuChangeState_D0Entry);
> > +		amdgpu_dpm_gfx_state_change(adev,
> sGpuChangeState_D0Entry);
> >
> >   	/* post card */
> >   	if (amdgpu_device_need_post(adev)) { diff --git
> > a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> > index 1916ec84dd71..3d8f82dc8c97 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> > @@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct
> amdgpu_device
> > *adev, uint32_t *value)
> >
> >   	mutex_lock(&adev->gfx.gfx_off_mutex);
> >
> > -	r = smu_get_status_gfxoff(adev, value);
> > +	r = amdgpu_dpm_get_status_gfxoff(adev, value);
> >
> >   	mutex_unlock(&adev->gfx.gfx_off_mutex);
> >
> > @@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct
> amdgpu_device *adev)
> >   	}
> >   	return amdgpu_num_kcq;
> >   }
> > -
> > -/* amdgpu_gfx_state_change_set - Handle gfx power state change set
> > - * @adev: amdgpu_device pointer
> > - * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2
> > -sGpuChangeState_D3Entry)
> > - *
> > - */
> > -
> > -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum
> > gfx_change_state state) -{
> > -	mutex_lock(&adev->pm.mutex);
> > -	if (adev->powerplay.pp_funcs &&
> > -	    adev->powerplay.pp_funcs->gfx_state_change_set)
> > -		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> > -			(adev)->powerplay.pp_handle, state));
> > -	mutex_unlock(&adev->pm.mutex);
> > -}
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > index f851196c83a5..776c886fd94a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> > @@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
> >   	AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
> >   };
> >
> > -/* Argument for PPSMC_MSG_GpuChangeState */ -enum
> gfx_change_state {
> > -	sGpuChangeState_D0Entry = 1,
> > -	sGpuChangeState_D3Entry,
> > -};
> > -
> >   #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
> >   #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
> >
> > @@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct
> amdgpu_device *adev,
> >   uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
> >   void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg,
> uint32_t v);
> >   int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev); -void
> > amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum
> gfx_change_state state);
> >   #endif
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> > index 3c623e589b79..35c4aec04a7e 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> > @@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct
> amdgpu_device *adev, struct ras_err_d
> >   	 * choosing right query method according to
> >   	 * whether smu support query error information
> >   	 */
> > -	ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
> > +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
> >   	if (ret == -EOPNOTSUPP) {
> >   		if (adev->umc.ras_funcs &&
> >   			adev->umc.ras_funcs->query_ras_error_count)
> > @@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct
> amdgpu_device *adev)
> >   		if (ret)
> >   			goto free;
> >
> > -		if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num)
> > -			adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num(&adev->smu, con-
> >eeprom_control.ras_num_recs);
> > +		amdgpu_dpm_send_hbm_bad_pages_num(adev,
> > +con->eeprom_control.ras_num_recs);
> >   	}
> >
> >   #ifdef CONFIG_X86_MCE_AMD
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> > index 6e4bea012ea4..5fed26c8db44 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> > @@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct
> amdgpu_device *adev,
> >   	int ret = 0;
> >
> >   	kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
> > -	ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
> > +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
> >   	if (ret == -EOPNOTSUPP) {
> >   		if (adev->umc.ras_funcs &&
> >   		    adev->umc.ras_funcs->query_ras_error_count)
> > @@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct
> amdgpu_device *adev,
> >   						err_data->err_addr_cnt);
> >   			amdgpu_ras_save_bad_pages(adev);
> >
> > -			if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num)
> > -				adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num(&adev->smu, con-
> >eeprom_control.ras_num_recs);
> > +			amdgpu_dpm_send_hbm_bad_pages_num(adev,
> > +con->eeprom_control.ras_num_recs);
> >   		}
> >
> >   		amdgpu_ras_reset_gpu(adev);
> > diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > index deae12dc777d..329a4c89f1e6 100644
> > --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > @@ -222,7 +222,7 @@ void
> > kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
> >
> >   	len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
> >   		       KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
> > -		       atomic64_read(&dev->adev->smu.throttle_int_counter));
> > +		       amdgpu_dpm_get_thermal_throttling_counter(dev-
> >adev));
> >
> >   	add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE,
> 	fifo_in, len);
> >   }
> > diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > index 5c0867ebcfce..2e295facd086 100644
> > --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > @@ -26,6 +26,10 @@
> >
> >   extern const struct amdgpu_ip_block_version pp_smu_ip_block;
> >
> > +enum smu_event_type {
> > +	SMU_EVENT_RESET_COMPLETE = 0,
> > +};
> > +
> >   struct amd_vce_state {
> >   	/* vce clocks */
> >   	u32 evclk;
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index 08362d506534..9b332c8a0079 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct
> > amdgpu_device *adev, uint32_t *smu_versio
> >
> >   	return 0;
> >   }
> > +
> > +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool
> enable)
> > +{
> > +	return smu_set_light_sbr(&adev->smu, enable); }
> > +
> > +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device
> *adev,
> > +uint32_t size) {
> > +	int ret = 0;
> > +
> > +	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num)
> > +		ret = adev->smu.ppt_funcs-
> >send_hbm_bad_pages_num(&adev->smu,
> > +size);
> > +
> > +	return ret;
> > +}
> > +
> > +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> > +				  enum pp_clock_type type,
> > +				  uint32_t *min,
> > +				  uint32_t *max)
> > +{
> > +	if (!is_support_sw_smu(adev))
> > +		return -EOPNOTSUPP;
> > +
> > +	switch (type) {
> > +	case PP_SCLK:
> > +		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK,
> min, max);
> > +	default:
> > +		return -EINVAL;
> > +	}
> > +}
> > +
> > +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> > +				   enum pp_clock_type type,
> > +				   uint32_t min,
> > +				   uint32_t max)
> > +{
> > +	if (!is_support_sw_smu(adev))
> > +		return -EOPNOTSUPP;
> > +
> > +	switch (type) {
> > +	case PP_SCLK:
> > +		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK,
> min, max);
> > +	default:
> > +		return -EINVAL;
> > +	}
> > +}
> > +
> > +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
> > +			      enum smu_event_type event,
> > +			      uint64_t event_arg)
> > +{
> > +	if (!is_support_sw_smu(adev))
> > +		return -EOPNOTSUPP;
> > +
> > +	return smu_wait_for_event(&adev->smu, event, event_arg); }
> > +
> > +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev,
> uint32_t
> > +*value) {
> > +	if (!is_support_sw_smu(adev))
> > +		return -EOPNOTSUPP;
> > +
> > +	return smu_get_status_gfxoff(&adev->smu, value); }
> > +
> > +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct
> > +amdgpu_device *adev) {
> > +	return atomic64_read(&adev->smu.throttle_int_counter);
> > +}
> > +
> > +/* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
> > + * @adev: amdgpu_device pointer
> > + * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2
> > +-sGpuChangeState_D3Entry)
> > + *
> > + */
> > +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> > +				 enum gfx_change_state state)
> > +{
> > +	mutex_lock(&adev->pm.mutex);
> > +	if (adev->powerplay.pp_funcs &&
> > +	    adev->powerplay.pp_funcs->gfx_state_change_set)
> > +		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> > +			(adev)->powerplay.pp_handle, state));
> > +	mutex_unlock(&adev->pm.mutex);
> > +}
> > +
> > +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> > +			    void *umc_ecc)
> > +{
> > +	if (!is_support_sw_smu(adev))
> > +		return -EOPNOTSUPP;
> > +
> 
> In general, I don't think we need to keep this check everywhere to make
> amdgpu_dpm_* backwards compatible. The usage is also inconsistent. For
> example, amdgpu_dpm_get_thermal_throttling_counter doesn't have any
> is_support_sw_smu check, whereas amdgpu_dpm_get_ecc_info() has it.
> There is no reason to keep adding an is_support_sw_smu() check for every
> new public API; for sure, those APIs are not going to work with the
> powerplay subsystem.
> 
> I would rather prefer to leave the old things as they are and create
> amdgpu_smu_* for anything which is supported only in the smu subsystem.
> It's also easier to read from a code perspective - it separates the ones
> supported by the smu component from those not supported in the older
> powerplay components.
> 
> Only for the common ones that are supported in both powerplay and smu,
> keep amdgpu_dpm_*; for the others the preference would be amdgpu_smu_*.
[Quan, Evan] I get your point. However, that would bring back the problem we are trying to avoid.
That is, the callers would need to know whether amdgpu_smu_* can be used, i.e. whether the swsmu framework is supported on a given ASIC.

And yes, there are some inconsistent cases in the current power code. Maybe we can create new patch(es) to fix them?
For this patch series, I would like to avoid any real code logic change.
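
To make it concrete with the hunk above (condensed): the amdgpu_dpm_* wrapper
keeps the framework check inside power, so a caller such as amdgpu_ras.c
never has to test for swsmu itself:

int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev, void *umc_ecc)
{
	if (!is_support_sw_smu(adev))
		return -EOPNOTSUPP;

	return smu_get_ecc_info(&adev->smu, umc_ecc);
}

With separate amdgpu_smu_* entry points, that check (or an equivalent ASIC
test) would have to live in every caller instead.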

BR
Evan
> 
> Thanks,
> Lijo
> 
> > +	return smu_get_ecc_info(&adev->smu, umc_ecc); }
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > index 16e3f72d31b9..7289d379a9fb 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > @@ -23,6 +23,12 @@
> >   #ifndef __AMDGPU_DPM_H__
> >   #define __AMDGPU_DPM_H__
> >
> > +/* Argument for PPSMC_MSG_GpuChangeState */ enum
> gfx_change_state {
> > +	sGpuChangeState_D0Entry = 1,
> > +	sGpuChangeState_D3Entry,
> > +};
> > +
> >   enum amdgpu_int_thermal_type {
> >   	THERMAL_TYPE_NONE,
> >   	THERMAL_TYPE_EXTERNAL,
> > @@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct
> amdgpu_device *adev, bool enable);
> >   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool
> enable);
> >   void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >   int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev,
> uint32_t
> > *smu_version);
> > -
> > +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool
> > +enable); int amdgpu_dpm_send_hbm_bad_pages_num(struct
> amdgpu_device
> > +*adev, uint32_t size); int amdgpu_dpm_get_dpm_freq_range(struct
> amdgpu_device *adev,
> > +				       enum pp_clock_type type,
> > +				       uint32_t *min,
> > +				       uint32_t *max);
> > +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> > +				        enum pp_clock_type type,
> > +				        uint32_t min,
> > +				        uint32_t max);
> > +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum
> smu_event_type event,
> > +		       uint64_t event_arg);
> > +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev,
> uint32_t
> > +*value); uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct
> > +amdgpu_device *adev); void amdgpu_dpm_gfx_state_change(struct
> amdgpu_device *adev,
> > +				 enum gfx_change_state state);
> > +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> > +			    void *umc_ecc);
> >   #endif
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > index f738f7dc20c9..29791bb21fba 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > @@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
> >   	uint32_t clk_dependency;
> >   };
> >
> > -enum smu_event_type {
> > -
> > -	SMU_EVENT_RESET_COMPLETE = 0,
> > -};
> > -
> >   #define SMU_TABLE_INIT(tables, table_id, s, a, d)	\
> >   	do {						\
> >   		tables[table_id].size = s;		\
> > @@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context *smu);
> >
> >   int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
> >
> > -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t
> > *value);
> > +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
> >
> >   int smu_set_light_sbr(struct smu_context *smu, bool enable);
> >
> > -int smu_wait_for_event(struct amdgpu_device *adev, enum
> > smu_event_type event,
> > +int smu_wait_for_event(struct smu_context *smu, enum
> smu_event_type
> > +event,
> >   		       uint64_t event_arg);
> >   int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
> >   int smu_stb_collect_info(struct smu_context *smu, void *buff,
> > uint32_t size); diff --git
> a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > index 5839918cb574..ef7d0e377965 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > @@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void
> *handle,
> >   	return ret;
> >   }
> >
> > -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t
> > *value)
> > +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
> >   {
> > -	int ret = 0;
> > -	struct smu_context *smu = &adev->smu;
> > +	if (!smu->ppt_funcs->get_gfx_off_status)
> > +		return -EINVAL;
> >
> > -	if (is_support_sw_smu(adev) && smu->ppt_funcs-
> >get_gfx_off_status)
> > -		*value = smu_get_gfx_off_status(smu);
> > -	else
> > -		ret = -EINVAL;
> > +	*value = smu_get_gfx_off_status(smu);
> >
> > -	return ret;
> > +	return 0;
> >   }
> >
> >   int smu_set_soft_freq_range(struct smu_context *smu, @@ -3167,11
> > +3164,10 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
> >   	.get_smu_prv_buf_details = smu_get_prv_buffer_details,
> >   };
> >
> > -int smu_wait_for_event(struct amdgpu_device *adev, enum
> > smu_event_type event,
> > +int smu_wait_for_event(struct smu_context *smu, enum
> smu_event_type
> > +event,
> >   		       uint64_t event_arg)
> >   {
> >   	int ret = -EINVAL;
> > -	struct smu_context *smu = &adev->smu;
> >
> >   	if (smu->ppt_funcs->wait_for_event) {
> >   		mutex_lock(&smu->mutex);
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c
  2021-11-30 13:04   ` Chen, Guchun
@ 2021-12-01  2:06     ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  2:06 UTC (permalink / raw)
  To: Chen, Guchun, amd-gfx
  Cc: Deucher, Alexander, Lazar, Lijo, Feng, Kenneth, Koenig,  Christian

[Public]



> -----Original Message-----
> From: Chen, Guchun <Guchun.Chen@amd.com>
> Sent: Tuesday, November 30, 2021 9:05 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Lazar, Lijo
> <Lijo.Lazar@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>; Koenig,
> Christian <Christian.Koenig@amd.com>; Quan, Evan <Evan.Quan@amd.com>
> Subject: RE: [PATCH V2 02/17] drm/amd/pm: do not expose power
> implementation details to amdgpu_pm.c
> 
> [Public]
> 
> Two nit-picks.
> 
> 1. It's better to drop the trailing "return" in amdgpu_dpm_get_current_power_state.
[Quan, Evan] I can do that.
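
A minimal sketch of what the cleaned-up wrapper would then look like
(illustrative only, based on the hunk quoted below, with the redundant
trailing "return" dropped):

	void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
						enum amd_pm_state_type *state)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		/* no callback: report the cached user state */
		if (!pp_funcs->get_current_power_state) {
			*state = adev->pm.dpm.user_state;
			return;
		}

		*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
		/* fall back to the user state on an out-of-range report */
		if (*state < POWER_STATE_TYPE_DEFAULT ||
		    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
			*state = adev->pm.dpm.user_state;
	}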
> 
> 2. In some functions, when the function pointer is NULL, the wrapper returns 0,
> while in other cases it returns -EOPNOTSUPP. Is there a reason for the difference?
[Quan, Evan] It is to stick with the original logic. We might update them later (in follow-up patches).
For this patch series, I would like to keep the changes minimal (no real logic changes).
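
To make the convention concrete, here are the two patterns side by side (both
taken from this patch; a sketch for discussion only). Hooks whose absence
should stay invisible to callers keep returning 0, while hooks whose callers
must be able to distinguish "unsupported" report -EOPNOTSUPP:

	/* optional hook: absence is not an error, keep the old behavior */
	int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		if (!pp_funcs->get_sclk_od)
			return 0;

		return pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
	}

	/* callers need to detect a missing hook, so report -EOPNOTSUPP */
	int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
					 struct pp_states_info *states)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		if (!pp_funcs->get_pp_num_states)
			return -EOPNOTSUPP;

		return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
	}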

Thanks
Evan
> 
> Regards,
> Guchun
> 
> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Evan
> Quan
> Sent: Tuesday, November 30, 2021 3:43 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Lazar, Lijo
> <Lijo.Lazar@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>; Koenig,
> Christian <Christian.Koenig@amd.com>; Quan, Evan <Evan.Quan@amd.com>
> Subject: [PATCH V2 02/17] drm/amd/pm: do not expose power
> implementation details to amdgpu_pm.c
> 
> amdgpu_pm.c holds all the user sysfs/hwmon interfaces. It's another
> client of our power APIs. It's not proper for it to poke into power
> implementation details there.
> 
> Signed-off-by: Evan Quan <evan.quan@amd.com>
> Change-Id: I397853ddb13eacfce841366de2a623535422df9a
> ---
>  drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 458 ++++++++++++++++++-
>  drivers/gpu/drm/amd/pm/amdgpu_pm.c        | 519 ++++++++--------------
>  drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 160 +++----
>  drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c |   3 -
>  4 files changed, 709 insertions(+), 431 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> index 9b332c8a0079..3c59f16c7a6f 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> @@ -1453,7 +1453,9 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
>  	if (equal)
>  		return;
> 
> -	amdgpu_dpm_set_power_state(adev);
> +	if (adev->powerplay.pp_funcs->set_power_state)
> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> +
>  	amdgpu_dpm_post_set_power_state(adev);
> 
>  	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> @@ -1709,3 +1711,457 @@ int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> 
>  	return smu_get_ecc_info(&adev->smu, umc_ecc);
>  }
> +
> +struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
> +						     uint32_t idx)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_vce_clock_state)
> +		return NULL;
> +
> +	return pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle,
> +					     idx);
> +}
> +
> +void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev,
> +					enum amd_pm_state_type *state)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_current_power_state) {
> +		*state = adev->pm.dpm.user_state;
> +		return;
> +	}
> +
> +	*state = pp_funcs->get_current_power_state(adev->powerplay.pp_handle);
> +	if (*state < POWER_STATE_TYPE_DEFAULT ||
> +	    *state > POWER_STATE_TYPE_INTERNAL_3DPERF)
> +		*state = adev->pm.dpm.user_state;
> +
> +	return;
> +}
> +
> +void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
> +				enum amd_pm_state_type state)
> +{
> +	adev->pm.dpm.user_state = state;
> +
> +	if (adev->powerplay.pp_funcs->dispatch_tasks)
> +		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
> +	else
> +		amdgpu_pm_compute_clocks(adev);
> +}
> +
> +enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +	enum amd_dpm_forced_level level;
> +
> +	if (pp_funcs->get_performance_level)
> +		level = pp_funcs->get_performance_level(adev->powerplay.pp_handle);
> +	else
> +		level = adev->pm.dpm.forced_level;
> +
> +	return level;
> +}
> +
> +int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
> +				       enum amd_dpm_forced_level level)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (pp_funcs->force_performance_level) {
> +		if (adev->pm.dpm.thermal_active)
> +			return -EINVAL;
> +
> +		if (pp_funcs->force_performance_level(adev->powerplay.pp_handle,
> +						      level))
> +			return -EINVAL;
> +	}
> +
> +	adev->pm.dpm.forced_level = level;
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
> +				 struct pp_states_info *states)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_pp_num_states)
> +		return -EOPNOTSUPP;
> +
> +	return pp_funcs->get_pp_num_states(adev->powerplay.pp_handle, states);
> +}
> +
> +int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
> +			      enum amd_pp_task task_id,
> +			      enum amd_pm_state_type *user_state)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->dispatch_tasks)
> +		return -EOPNOTSUPP;
> +
> +	return pp_funcs->dispatch_tasks(adev->powerplay.pp_handle, task_id, user_state);
> +}
> +
> +int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_pp_table)
> +		return 0;
> +
> +	return pp_funcs->get_pp_table(adev->powerplay.pp_handle, table);
> +}
> +
> +int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
> +				      uint32_t type,
> +				      long *input,
> +				      uint32_t size)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_fine_grain_clk_vol)
> +		return 0;
> +
> +	return pp_funcs->set_fine_grain_clk_vol(adev->powerplay.pp_handle,
> +						type,
> +						input,
> +						size);
> +}
> +
> +int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
> +				  uint32_t type,
> +				  long *input,
> +				  uint32_t size)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->odn_edit_dpm_table)
> +		return 0;
> +
> +	return pp_funcs->odn_edit_dpm_table(adev->powerplay.pp_handle,
> +					    type,
> +					    input,
> +					    size);
> +}
> +
> +int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
> +				  enum pp_clock_type type,
> +				  char *buf)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->print_clock_levels)
> +		return 0;
> +
> +	return pp_funcs->print_clock_levels(adev->powerplay.pp_handle,
> +					    type,
> +					    buf);
> +}
> +
> +int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
> +				    uint64_t ppfeature_masks)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_ppfeature_status)
> +		return 0;
> +
> +	return pp_funcs->set_ppfeature_status(adev->powerplay.pp_handle,
> +					      ppfeature_masks);
> +}
> +
> +int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_ppfeature_status)
> +		return 0;
> +
> +	return pp_funcs->get_ppfeature_status(adev->powerplay.pp_handle,
> +					      buf);
> +}
> +
> +int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
> +				 enum pp_clock_type type,
> +				 uint32_t mask)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->force_clock_level)
> +		return 0;
> +
> +	return pp_funcs->force_clock_level(adev->powerplay.pp_handle,
> +					   type,
> +					   mask);
> +}
> +
> +int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_sclk_od)
> +		return 0;
> +
> +	return pp_funcs->get_sclk_od(adev->powerplay.pp_handle);
> +}
> +
> +int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_sclk_od)
> +		return -EOPNOTSUPP;
> +
> +	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
> +
> +	if (amdgpu_dpm_dispatch_task(adev,
> +				     AMD_PP_TASK_READJUST_POWER_STATE,
> +				     NULL) == -EOPNOTSUPP) {
> +		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
> +		amdgpu_pm_compute_clocks(adev);
> +	}
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_mclk_od)
> +		return 0;
> +
> +	return pp_funcs->get_mclk_od(adev->powerplay.pp_handle);
> +}
> +
> +int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_mclk_od)
> +		return -EOPNOTSUPP;
> +
> +	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
> +
> +	if (amdgpu_dpm_dispatch_task(adev,
> +				     AMD_PP_TASK_READJUST_POWER_STATE,
> +				     NULL) == -EOPNOTSUPP) {
> +		adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
> +		amdgpu_pm_compute_clocks(adev);
> +	}
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
> +				      char *buf)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_power_profile_mode)
> +		return -EOPNOTSUPP;
> +
> +	return pp_funcs->get_power_profile_mode(adev->powerplay.pp_handle,
> +						buf);
> +}
> +
> +int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
> +				      long *input, uint32_t size)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_power_profile_mode)
> +		return 0;
> +
> +	return pp_funcs->set_power_profile_mode(adev->powerplay.pp_handle,
> +						input,
> +						size);
> +}
> +
> +int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void **table)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_gpu_metrics)
> +		return 0;
> +
> +	return pp_funcs->get_gpu_metrics(adev->powerplay.pp_handle, table);
> +}
> +
> +int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
> +				    uint32_t *fan_mode)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_fan_control_mode)
> +		return -EOPNOTSUPP;
> +
> +	*fan_mode = pp_funcs->get_fan_control_mode(adev->powerplay.pp_handle);
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
> +				 uint32_t speed)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_fan_speed_pwm)
> +		return -EINVAL;
> +
> +	return pp_funcs->set_fan_speed_pwm(adev->powerplay.pp_handle, speed);
> +}
> +
> +int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
> +				 uint32_t *speed)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_fan_speed_pwm)
> +		return -EINVAL;
> +
> +	return pp_funcs->get_fan_speed_pwm(adev->powerplay.pp_handle, speed);
> +}
> +
> +int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
> +				 uint32_t *speed)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_fan_speed_rpm)
> +		return -EINVAL;
> +
> +	return pp_funcs->get_fan_speed_rpm(adev->powerplay.pp_handle, speed);
> +}
> +
> +int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
> +				 uint32_t speed)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_fan_speed_rpm)
> +		return -EINVAL;
> +
> +	return pp_funcs->set_fan_speed_rpm(adev->powerplay.pp_handle, speed);
> +}
> +
> +int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
> +				    uint32_t mode)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_fan_control_mode)
> +		return -EOPNOTSUPP;
> +
> +	pp_funcs->set_fan_control_mode(adev->powerplay.pp_handle, mode);
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
> +			       uint32_t *limit,
> +			       enum pp_power_limit_level pp_limit_level,
> +			       enum pp_power_type power_type)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_power_limit)
> +		return -ENODATA;
> +
> +	return pp_funcs->get_power_limit(adev->powerplay.pp_handle,
> +					 limit,
> +					 pp_limit_level,
> +					 power_type);
> +}
> +
> +int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
> +			       uint32_t limit)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_power_limit)
> +		return -EINVAL;
> +
> +	return pp_funcs->set_power_limit(adev->powerplay.pp_handle, limit);
> +}
> +
> +int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return false;
> +
> +	return is_support_cclk_dpm(adev);
> +}
> +
> +int amdgpu_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
> +						       struct seq_file *m)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->debugfs_print_current_performance_level)
> +		return -EOPNOTSUPP;
> +
> +	pp_funcs->debugfs_print_current_performance_level(adev->powerplay.pp_handle,
> +							  m);
> +
> +	return 0;
> +}
> +
> +int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
> +				       void **addr,
> +				       size_t *size)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->get_smu_prv_buf_details)
> +		return -ENOSYS;
> +
> +	return pp_funcs->get_smu_prv_buf_details(adev->powerplay.pp_handle,
> +						 addr,
> +						 size);
> +}
> +
> +int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
> +{
> +	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
> +
> +	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
> +	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
> +		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
> +		return true;
> +
> +	return false;
> +}
> +
> +int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
> +			    const char *buf,
> +			    size_t size)
> +{
> +	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> +
> +	if (!pp_funcs->set_pp_table)
> +		return -EOPNOTSUPP;
> +
> +	return pp_funcs->set_pp_table(adev->powerplay.pp_handle,
> +				      buf,
> +				      size);
> +}
> +
> +int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
> +{
> +	return adev->smu.cpu_core_num;
> +}
> +
> +void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
> +{
> +	if (!is_support_sw_smu(adev))
> +		return;
> +
> +	amdgpu_smu_stb_debug_fs_init(adev);
> +}
> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> index 082539c70fd4..3382d30b5d90 100644
> --- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> +++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> @@ -34,7 +34,6 @@
>  #include <linux/nospec.h>
>  #include <linux/pm_runtime.h>
>  #include <asm/processor.h>
> -#include "hwmgr.h"
> 
>  static const struct cg_flag_name clocks[] = {
>  	{AMD_CG_SUPPORT_GFX_FGCG, "Graphics Fine Grain Clock
> Gating"},
> @@ -132,7 +131,6 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
>  {
>  	struct drm_device *ddev = dev_get_drvdata(dev);
>  	struct amdgpu_device *adev = drm_to_adev(ddev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	enum amd_pm_state_type pm;
>  	int ret;
> 
> @@ -147,11 +145,7 @@ static ssize_t amdgpu_get_power_dpm_state(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (pp_funcs->get_current_power_state) {
> -		pm = amdgpu_dpm_get_current_power_state(adev);
> -	} else {
> -		pm = adev->pm.dpm.user_state;
> -	}
> +	amdgpu_dpm_get_current_power_state(adev, &pm);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -191,19 +185,8 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (is_support_sw_smu(adev)) {
> -		mutex_lock(&adev->pm.mutex);
> -		adev->pm.dpm.user_state = state;
> -		mutex_unlock(&adev->pm.mutex);
> -	} else if (adev->powerplay.pp_funcs->dispatch_tasks) {
> -		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
> -	} else {
> -		mutex_lock(&adev->pm.mutex);
> -		adev->pm.dpm.user_state = state;
> -		mutex_unlock(&adev->pm.mutex);
> +	amdgpu_dpm_set_power_state(adev, state);
> 
> -		amdgpu_pm_compute_clocks(adev);
> -	}
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
> @@ -290,10 +273,7 @@ static ssize_t amdgpu_get_power_dpm_force_performance_level(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_performance_level)
> -		level = amdgpu_dpm_get_performance_level(adev);
> -	else
> -		level = adev->pm.dpm.forced_level;
> +	level = amdgpu_dpm_get_performance_level(adev);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -318,7 +298,6 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
>  {
>  	struct drm_device *ddev = dev_get_drvdata(dev);
>  	struct amdgpu_device *adev = drm_to_adev(ddev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	enum amd_dpm_forced_level level;
>  	enum amd_dpm_forced_level current_level;
>  	int ret = 0;
> @@ -358,11 +337,7 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (pp_funcs->get_performance_level)
> -		current_level = amdgpu_dpm_get_performance_level(adev);
> -	else
> -		current_level = adev->pm.dpm.forced_level;
> -
> +	current_level = amdgpu_dpm_get_performance_level(adev);
>  	if (current_level == level) {
>  		pm_runtime_mark_last_busy(ddev->dev);
>  		pm_runtime_put_autosuspend(ddev->dev);
> @@ -390,25 +365,12 @@ static ssize_t amdgpu_set_power_dpm_force_performance_level(struct device *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (pp_funcs->force_performance_level) {
> -		mutex_lock(&adev->pm.mutex);
> -		if (adev->pm.dpm.thermal_active) {
> -			mutex_unlock(&adev->pm.mutex);
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		}
> -		ret = amdgpu_dpm_force_performance_level(adev, level);
> -		if (ret) {
> -			mutex_unlock(&adev->pm.mutex);
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		} else {
> -			adev->pm.dpm.forced_level = level;
> -		}
> -		mutex_unlock(&adev->pm.mutex);
> +	if (amdgpu_dpm_force_performance_level(adev, level)) {
> +		pm_runtime_mark_last_busy(ddev->dev);
> +		pm_runtime_put_autosuspend(ddev->dev);
> +		return -EINVAL;
>  	}
> +
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
> @@ -421,7 +383,6 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
>  {
>  	struct drm_device *ddev = dev_get_drvdata(dev);
>  	struct amdgpu_device *adev = drm_to_adev(ddev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	struct pp_states_info data;
>  	uint32_t i;
>  	int buf_len, ret;
> @@ -437,11 +398,8 @@ static ssize_t amdgpu_get_pp_num_states(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (pp_funcs->get_pp_num_states) {
> -		amdgpu_dpm_get_pp_num_states(adev, &data);
> -	} else {
> +	if (amdgpu_dpm_get_pp_num_states(adev, &data))
>  		memset(&data, 0, sizeof(data));
> -	}
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -463,7 +421,6 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
>  {
>  	struct drm_device *ddev = dev_get_drvdata(dev);
>  	struct amdgpu_device *adev = drm_to_adev(ddev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	struct pp_states_info data = {0};
>  	enum amd_pm_state_type pm = 0;
>  	int i = 0, ret = 0;
> @@ -479,15 +436,16 @@ static ssize_t amdgpu_get_pp_cur_state(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (pp_funcs->get_current_power_state
> -		 && pp_funcs->get_pp_num_states) {
> -		pm = amdgpu_dpm_get_current_power_state(adev);
> -		amdgpu_dpm_get_pp_num_states(adev, &data);
> -	}
> +	amdgpu_dpm_get_current_power_state(adev, &pm);
> +
> +	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
> +	if (ret)
> +		return ret;
> +
>  	for (i = 0; i < data.nums; i++) {
>  		if (pm == data.states[i])
>  			break;
> @@ -525,6 +483,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
>  	struct drm_device *ddev = dev_get_drvdata(dev);
>  	struct amdgpu_device *adev = drm_to_adev(ddev);
>  	enum amd_pm_state_type state = 0;
> +	struct pp_states_info data;
>  	unsigned long idx;
>  	int ret;
> 
> @@ -533,41 +492,49 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
>  	if (adev->in_suspend && !adev->in_runpm)
>  		return -EPERM;
> 
> -	if (strlen(buf) == 1)
> -		adev->pp_force_state_enabled = false;
> -	else if (is_support_sw_smu(adev))
> -		adev->pp_force_state_enabled = false;
> -	else if (adev->powerplay.pp_funcs->dispatch_tasks &&
> -			adev->powerplay.pp_funcs->get_pp_num_states) {
> -		struct pp_states_info data;
> -
> -		ret = kstrtoul(buf, 0, &idx);
> -		if (ret || idx >= ARRAY_SIZE(data.states))
> -			return -EINVAL;
> +	adev->pp_force_state_enabled = false;
> 
> -		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
> +	if (strlen(buf) == 1)
> +		return count;
> 
> -		amdgpu_dpm_get_pp_num_states(adev, &data);
> -		state = data.states[idx];
> +	ret = kstrtoul(buf, 0, &idx);
> +	if (ret || idx >= ARRAY_SIZE(data.states))
> +		return -EINVAL;
> 
> -		ret = pm_runtime_get_sync(ddev->dev);
> -		if (ret < 0) {
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return ret;
> -		}
> +	idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
> 
> -		/* only set user selected power states */
> -		if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
> -		    state != POWER_STATE_TYPE_DEFAULT) {
> -			amdgpu_dpm_dispatch_task(adev,
> -					AMD_PP_TASK_ENABLE_USER_STATE, &state);
> -			adev->pp_force_state_enabled = true;
> -		}
> -		pm_runtime_mark_last_busy(ddev->dev);
> +	ret = pm_runtime_get_sync(ddev->dev);
> +	if (ret < 0) {
>  		pm_runtime_put_autosuspend(ddev->dev);
> +		return ret;
> +	}
> +
> +	ret = amdgpu_dpm_get_pp_num_states(adev, &data);
> +	if (ret)
> +		goto err_out;
> +
> +	state = data.states[idx];
> +
> +	/* only set user selected power states */
> +	if (state != POWER_STATE_TYPE_INTERNAL_BOOT &&
> +	    state != POWER_STATE_TYPE_DEFAULT) {
> +		ret = amdgpu_dpm_dispatch_task(adev,
> +				AMD_PP_TASK_ENABLE_USER_STATE, &state);
> +		if (ret)
> +			goto err_out;
> +
> +		adev->pp_force_state_enabled = true;
>  	}
> 
> +	pm_runtime_mark_last_busy(ddev->dev);
> +	pm_runtime_put_autosuspend(ddev->dev);
> +
>  	return count;
> +
> +err_out:
> +	pm_runtime_mark_last_busy(ddev->dev);
> +	pm_runtime_put_autosuspend(ddev->dev);
> +	return ret;
>  }
> 
>  /**
> @@ -601,17 +568,13 @@ static ssize_t amdgpu_get_pp_table(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_pp_table) {
> -		size = amdgpu_dpm_get_pp_table(adev, &table);
> -		pm_runtime_mark_last_busy(ddev->dev);
> -		pm_runtime_put_autosuspend(ddev->dev);
> -		if (size < 0)
> -			return size;
> -	} else {
> -		pm_runtime_mark_last_busy(ddev->dev);
> -		pm_runtime_put_autosuspend(ddev->dev);
> -		return 0;
> -	}
> +	size = amdgpu_dpm_get_pp_table(adev, &table);
> +
> +	pm_runtime_mark_last_busy(ddev->dev);
> +	pm_runtime_put_autosuspend(ddev->dev);
> +
> +	if (size <= 0)
> +		return size;
> 
>  	if (size >= PAGE_SIZE)
>  		size = PAGE_SIZE - 1;
> @@ -642,15 +605,13 @@ static ssize_t amdgpu_set_pp_table(struct device *dev,
>  	}
> 
>  	ret = amdgpu_dpm_set_pp_table(adev, buf, count);
> -	if (ret) {
> -		pm_runtime_mark_last_busy(ddev->dev);
> -		pm_runtime_put_autosuspend(ddev->dev);
> -		return ret;
> -	}
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
> +	if (ret)
> +		return ret;
> +
>  	return count;
>  }
> 
> @@ -866,46 +827,32 @@ static ssize_t amdgpu_set_pp_od_clk_voltage(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->set_fine_grain_clk_vol) {
> -		ret = amdgpu_dpm_set_fine_grain_clk_vol(adev, type,
> -							parameter,
> -							parameter_size);
> -		if (ret) {
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		}
> -	}
> +	if (amdgpu_dpm_set_fine_grain_clk_vol(adev,
> +					      type,
> +					      parameter,
> +					      parameter_size))
> +		goto err_out;
> 
> -	if (adev->powerplay.pp_funcs->odn_edit_dpm_table) {
> -		ret = amdgpu_dpm_odn_edit_dpm_table(adev, type,
> -						    parameter, parameter_size);
> -		if (ret) {
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		}
> -	}
> +	if (amdgpu_dpm_odn_edit_dpm_table(adev, type,
> +					  parameter, parameter_size))
> +		goto err_out;
> 
>  	if (type == PP_OD_COMMIT_DPM_TABLE) {
> -		if (adev->powerplay.pp_funcs->dispatch_tasks) {
> -			amdgpu_dpm_dispatch_task(adev,
> -						 AMD_PP_TASK_READJUST_POWER_STATE,
> -						 NULL);
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return count;
> -		} else {
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		}
> +		if (amdgpu_dpm_dispatch_task(adev,
> +					     AMD_PP_TASK_READJUST_POWER_STATE,
> +					     NULL))
> +			goto err_out;
>  	}
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
>  	return count;
> +
> +err_out:
> +	pm_runtime_mark_last_busy(ddev->dev);
> +	pm_runtime_put_autosuspend(ddev->dev);
> +	return -EINVAL;
>  }
> 
>  static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
> @@ -928,8 +875,8 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->print_clock_levels) {
> -		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
> +	size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
> +	if (size > 0) {
> 		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
> 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf+size);
> 		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf+size);
> @@ -985,17 +932,14 @@ static ssize_t amdgpu_set_pp_features(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->set_ppfeature_status) {
> -		ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
> -		if (ret) {
> -			pm_runtime_mark_last_busy(ddev->dev);
> -			pm_runtime_put_autosuspend(ddev->dev);
> -			return -EINVAL;
> -		}
> -	}
> +	ret = amdgpu_dpm_set_ppfeature_status(adev, featuremask);
> +
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> 
> +	if (ret)
> +		return -EINVAL;
> +
>  	return count;
>  }
> 
> @@ -1019,9 +963,8 @@ static ssize_t amdgpu_get_pp_features(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_ppfeature_status)
> -		size = amdgpu_dpm_get_ppfeature_status(adev, buf);
> -	else
> +	size = amdgpu_dpm_get_ppfeature_status(adev, buf);
> +	if (size <= 0)
>  		size = sysfs_emit(buf, "\n");
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
> @@ -1080,9 +1023,8 @@ static ssize_t amdgpu_get_pp_dpm_clock(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->print_clock_levels)
> -		size = amdgpu_dpm_print_clock_levels(adev, type, buf);
> -	else
> +	size = amdgpu_dpm_print_clock_levels(adev, type, buf);
> +	if (size <= 0)
>  		size = sysfs_emit(buf, "\n");
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
> @@ -1151,10 +1093,7 @@ static ssize_t amdgpu_set_pp_dpm_clock(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->force_clock_level)
> -		ret = amdgpu_dpm_force_clock_level(adev, type, mask);
> -	else
> -		ret = 0;
> +	ret = amdgpu_dpm_force_clock_level(adev, type, mask);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1305,10 +1244,7 @@ static ssize_t amdgpu_get_pp_sclk_od(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (is_support_sw_smu(adev))
> -		value = 0;
> -	else if (adev->powerplay.pp_funcs->get_sclk_od)
> -		value = amdgpu_dpm_get_sclk_od(adev);
> +	value = amdgpu_dpm_get_sclk_od(adev);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1342,19 +1278,7 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (is_support_sw_smu(adev)) {
> -		value = 0;
> -	} else {
> -		if (adev->powerplay.pp_funcs->set_sclk_od)
> -			amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> -
> -		if (adev->powerplay.pp_funcs->dispatch_tasks) {
> -			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
> -		} else {
> -			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
> -			amdgpu_pm_compute_clocks(adev);
> -		}
> -	}
> +	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1382,10 +1306,7 @@ static ssize_t amdgpu_get_pp_mclk_od(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (is_support_sw_smu(adev))
> -		value = 0;
> -	else if (adev->powerplay.pp_funcs->get_mclk_od)
> -		value = amdgpu_dpm_get_mclk_od(adev);
> +	value = amdgpu_dpm_get_mclk_od(adev);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1419,19 +1340,7 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (is_support_sw_smu(adev)) {
> -		value = 0;
> -	} else {
> -		if (adev->powerplay.pp_funcs->set_mclk_od)
> -			amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> -
> -		if (adev->powerplay.pp_funcs->dispatch_tasks) {
> -			amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_READJUST_POWER_STATE, NULL);
> -		} else {
> -			adev->pm.dpm.current_ps = adev->pm.dpm.boot_ps;
> -			amdgpu_pm_compute_clocks(adev);
> -		}
> -	}
> +	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1479,9 +1388,8 @@ static ssize_t amdgpu_get_pp_power_profile_mode(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_power_profile_mode)
> -		size = amdgpu_dpm_get_power_profile_mode(adev, buf);
> -	else
> +	size = amdgpu_dpm_get_power_profile_mode(adev, buf);
> +	if (size <= 0)
>  		size = sysfs_emit(buf, "\n");
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
> @@ -1545,8 +1453,7 @@ static ssize_t amdgpu_set_pp_power_profile_mode(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->set_power_profile_mode)
> -		ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
> +	ret = amdgpu_dpm_set_power_profile_mode(adev, parameter, parameter_size);
> 
>  	pm_runtime_mark_last_busy(ddev->dev);
>  	pm_runtime_put_autosuspend(ddev->dev);
> @@ -1812,9 +1719,7 @@ static ssize_t amdgpu_get_gpu_metrics(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_gpu_metrics)
> -		size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
> -
> +	size = amdgpu_dpm_get_gpu_metrics(adev, &gpu_metrics);
>  	if (size <= 0)
>  		goto out;
> 
> @@ -2053,7 +1958,6 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
>  {
>  	struct device_attribute *dev_attr = &attr->dev_attr;
>  	const char *attr_name = dev_attr->attr.name;
> -	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
>  	enum amd_asic_type asic_type = adev->asic_type;
> 
>  	if (!(attr->flags & mask)) {
> @@ -2076,9 +1980,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
>  			*states = ATTR_STATE_UNSUPPORTED;
>  	} else if (DEVICE_ATTR_IS(pp_od_clk_voltage)) {
>  		*states = ATTR_STATE_UNSUPPORTED;
> -		if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
> -		    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
> -			(!is_support_sw_smu(adev) && hwmgr->od_enabled))
>  			*states = ATTR_STATE_SUPPORTED;
>  	} else if (DEVICE_ATTR_IS(mem_busy_percent)) {
>  		if (adev->flags & AMD_IS_APU || asic_type == CHIP_VEGA10)
> @@ -2105,8 +2007,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
>  		if (!(asic_type == CHIP_VANGOGH || asic_type == CHIP_SIENNA_CICHLID))
>  			*states = ATTR_STATE_UNSUPPORTED;
>  	} else if (DEVICE_ATTR_IS(pp_power_profile_mode)) {
> -		if (!adev->powerplay.pp_funcs->get_power_profile_mode ||
> -		    amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
> +		if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
>  			*states = ATTR_STATE_UNSUPPORTED;
>  	}
> 
> @@ -2389,17 +2290,14 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -EINVAL;
> -	}
> -
> -	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
> +	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> +	if (ret)
> +		return -EINVAL;
> +
>  	return sysfs_emit(buf, "%u\n", pwm_mode);
>  }
> 
> @@ -2427,17 +2325,14 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -EINVAL;
> -	}
> -
> -	amdgpu_dpm_set_fan_control_mode(adev, value);
> +	ret = amdgpu_dpm_set_fan_control_mode(adev, value);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> +	if (ret)
> +		return -EINVAL;
> +
>  	return count;
>  }
> 
> @@ -2469,32 +2364,29 @@ static ssize_t amdgpu_hwmon_set_pwm1(struct device *dev,
>  	if (adev->in_suspend && !adev->in_runpm)
>  		return -EPERM;
> 
> +	err = kstrtou32(buf, 10, &value);
> +	if (err)
> +		return err;
> +
>  	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
>  	if (err < 0) {
>  		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>  		return err;
>  	}
> 
> -	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
> +	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
> +	if (err)
> +		goto out;
> +
>  	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
> 		pr_info("manual fan speed control should be enabled first\n");
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -EINVAL;
> +		err = -EINVAL;
> +		goto out;
>  	}
> 
> -	err = kstrtou32(buf, 10, &value);
> -	if (err) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return err;
> -	}
> -
> -	if (adev->powerplay.pp_funcs->set_fan_speed_pwm)
> -		err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_set_fan_speed_pwm(adev, value);
> 
> +out:
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> @@ -2523,10 +2415,7 @@ static ssize_t amdgpu_hwmon_get_pwm1(struct device *dev,
>  		return err;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_fan_speed_pwm)
> -		err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_get_fan_speed_pwm(adev, &speed);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> @@ -2556,10 +2445,7 @@ static ssize_t amdgpu_hwmon_get_fan1_input(struct device *dev,
>  		return err;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
> -		err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_get_fan_speed_rpm(adev, &speed);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> @@ -2653,10 +2539,7 @@ static ssize_t amdgpu_hwmon_get_fan1_target(struct device *dev,
>  		return err;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->get_fan_speed_rpm)
> -		err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_get_fan_speed_rpm(adev, &rpm);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> @@ -2681,32 +2564,28 @@ static ssize_t amdgpu_hwmon_set_fan1_target(struct device *dev,
>  	if (adev->in_suspend && !adev->in_runpm)
>  		return -EPERM;
> 
> +	err = kstrtou32(buf, 10, &value);
> +	if (err)
> +		return err;
> +
>  	err = pm_runtime_get_sync(adev_to_drm(adev)->dev);
>  	if (err < 0) {
>  		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>  		return err;
>  	}
> 
> -	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
> +	err = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
> +	if (err)
> +		goto out;
> 
>  	if (pwm_mode != AMD_FAN_CTRL_MANUAL) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -ENODATA;
> -	}
> -
> -	err = kstrtou32(buf, 10, &value);
> -	if (err) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return err;
> +		err = -ENODATA;
> +		goto out;
>  	}
> 
> -	if (adev->powerplay.pp_funcs->set_fan_speed_rpm)
> -		err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_set_fan_speed_rpm(adev, value);
> 
> +out:
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> @@ -2735,17 +2614,14 @@ static ssize_t amdgpu_hwmon_get_fan1_enable(struct device *dev,
>  		return ret;
>  	}
> 
> -	if (!adev->powerplay.pp_funcs->get_fan_control_mode) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -EINVAL;
> -	}
> -
> -	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
> +	ret = amdgpu_dpm_get_fan_control_mode(adev, &pwm_mode);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> +	if (ret)
> +		return -EINVAL;
> +
>  	return sysfs_emit(buf, "%i\n", pwm_mode ==
> AMD_FAN_CTRL_AUTO ? 0 : 1);
>  }
> 
> @@ -2781,16 +2657,14 @@ static ssize_t amdgpu_hwmon_set_fan1_enable(struct device *dev,
>  		return err;
>  	}
> 
> -	if (!adev->powerplay.pp_funcs->set_fan_control_mode) {
> -		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> -		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> -		return -EINVAL;
> -	}
> -	amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
> +	err = amdgpu_dpm_set_fan_control_mode(adev, pwm_mode);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> 
> +	if (err)
> +		return -EINVAL;
> +
>  	return count;
>  }
> 
> @@ -2926,7 +2800,6 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
>  					enum pp_power_limit_level pp_limit_level)
>  {
>  	struct amdgpu_device *adev = dev_get_drvdata(dev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	enum pp_power_type power_type = to_sensor_dev_attr(attr)->index;
>  	uint32_t limit;
>  	ssize_t size;
> @@ -2937,16 +2810,13 @@ static ssize_t amdgpu_hwmon_show_power_cap_generic(struct device *dev,
>  	if (adev->in_suspend && !adev->in_runpm)
>  		return -EPERM;
> 
> -	if ( !(pp_funcs && pp_funcs->get_power_limit))
> -		return -ENODATA;
> -
>  	r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
>  	if (r < 0) {
>  		pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>  		return r;
>  	}
> 
> -	r = pp_funcs->get_power_limit(adev->powerplay.pp_handle, &limit,
> +	r = amdgpu_dpm_get_power_limit(adev, &limit,
>  				      pp_limit_level, power_type);
> 
>  	if (!r)
> @@ -3001,7 +2871,6 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
>  		size_t count)
>  {
>  	struct amdgpu_device *adev = dev_get_drvdata(dev);
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>  	int limit_type = to_sensor_dev_attr(attr)->index;
>  	int err;
>  	u32 value;
> @@ -3027,10 +2896,7 @@ static ssize_t amdgpu_hwmon_set_power_cap(struct device *dev,
>  		return err;
>  	}
> 
> -	if (pp_funcs && pp_funcs->set_power_limit)
> -		err = pp_funcs->set_power_limit(adev->powerplay.pp_handle, value);
> -	else
> -		err = -EINVAL;
> +	err = amdgpu_dpm_set_power_limit(adev, value);
> 
>  	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>  	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> @@ -3303,6 +3169,7 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
>  	struct device *dev = kobj_to_dev(kobj);
>  	struct amdgpu_device *adev = dev_get_drvdata(dev);
>  	umode_t effective_mode = attr->mode;
> +	uint32_t speed = 0;
> 
>  	/* under multi-vf mode, the hwmon attributes are all not supported */
>  	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
> @@ -3367,20 +3234,18 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
>  	     attr == &sensor_dev_attr_fan1_enable.dev_attr.attr))
>  		return 0;
> 
> -	if (!is_support_sw_smu(adev)) {
> -		/* mask fan attributes if we have no bindings for this asic to expose */
> -		if ((!adev->powerplay.pp_funcs->get_fan_speed_pwm &&
> -		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
> -		    (!adev->powerplay.pp_funcs->get_fan_control_mode &&
> -		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
> -			effective_mode &= ~S_IRUGO;
> +	/* mask fan attributes if we have no bindings for this asic to expose */
> +	if (((amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
> +	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't query fan */
> +	    ((amdgpu_dpm_get_fan_control_mode(adev, &speed) == -EOPNOTSUPP) &&
> +	     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't query state */
> +		effective_mode &= ~S_IRUGO;
> 
> -		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
> -		     attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
> -		    (!adev->powerplay.pp_funcs->set_fan_control_mode &&
> -		     attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
> -			effective_mode &= ~S_IWUSR;
> -	}
> +	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
> +	      attr == &sensor_dev_attr_pwm1.dev_attr.attr) || /* can't manage fan */
> +	      ((amdgpu_dpm_set_fan_control_mode(adev, speed) == -EOPNOTSUPP) &&
> +	      attr == &sensor_dev_attr_pwm1_enable.dev_attr.attr)) /* can't manage state */
> +		effective_mode &= ~S_IWUSR;
> 
>  	if (((adev->family == AMDGPU_FAMILY_SI) ||
>  		 ((adev->flags & AMD_IS_APU) &&
> @@ -3397,22 +3262,20 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
>  	    (attr == &sensor_dev_attr_power1_average.dev_attr.attr))
>  		return 0;
> 
> -	if (!is_support_sw_smu(adev)) {
> -		/* hide max/min values if we can't both query and manage the fan */
> -		if ((!adev->powerplay.pp_funcs->set_fan_speed_pwm &&
> -		     !adev->powerplay.pp_funcs->get_fan_speed_pwm) &&
> -		     (!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
> -		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
> -		    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
> -		     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
> -			return 0;
> +	/* hide max/min values if we can't both query and manage the fan */
> +	if (((amdgpu_dpm_set_fan_speed_pwm(adev, speed) == -EINVAL) &&
> +	      (amdgpu_dpm_get_fan_speed_pwm(adev, &speed) == -EINVAL) &&
> +	      (amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
> +	      (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL)) &&
> +	    (attr == &sensor_dev_attr_pwm1_max.dev_attr.attr ||
> +	     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
> +		return 0;
> 
> -		if ((!adev->powerplay.pp_funcs->set_fan_speed_rpm &&
> -		     !adev->powerplay.pp_funcs->get_fan_speed_rpm) &&
> -		    (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
> -		     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
> -			return 0;
> -	}
> +	if ((amdgpu_dpm_set_fan_speed_rpm(adev, speed) == -EINVAL) &&
> +	     (amdgpu_dpm_get_fan_speed_rpm(adev, &speed) == -EINVAL) &&
> +	     (attr == &sensor_dev_attr_fan1_max.dev_attr.attr ||
> +	     attr == &sensor_dev_attr_fan1_min.dev_attr.attr))
> +		return 0;
> 
>  	if ((adev->family == AMDGPU_FAMILY_SI ||	/* not implemented yet */
>  	     adev->family == AMDGPU_FAMILY_KV) &&	/* not implemented yet */
> @@ -3542,14 +3405,15 @@ static void amdgpu_debugfs_prints_cpu_info(struct seq_file *m,
>  	uint16_t *p_val;
>  	uint32_t size;
>  	int i;
> +	uint32_t num_cpu_cores = amdgpu_dpm_get_num_cpu_cores(adev);
> 
> -	if (is_support_cclk_dpm(adev)) {
> -		p_val = kcalloc(adev->smu.cpu_core_num, sizeof(uint16_t),
> +	if (amdgpu_dpm_is_cclk_dpm_supported(adev)) {
> +		p_val = kcalloc(num_cpu_cores, sizeof(uint16_t),
>  				GFP_KERNEL);
> 
>  		if (!amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_CPU_CLK,
>  					    (void *)p_val, &size)) {
> -			for (i = 0; i < adev->smu.cpu_core_num; i++)
> +			for (i = 0; i < num_cpu_cores; i++)
>  				seq_printf(m, "\t%u MHz (CPU%d)\n",
>  					   *(p_val + i), i);
>  		}
> @@ -3677,27 +3541,11 @@ static int amdgpu_debugfs_pm_info_show(struct seq_file *m, void *unused)
>  		return r;
>  	}
> 
> -	if (!adev->pm.dpm_enabled) {
> -		seq_printf(m, "dpm not enabled\n");
> -		pm_runtime_mark_last_busy(dev->dev);
> -		pm_runtime_put_autosuspend(dev->dev);
> -		return 0;
> -	}
> -
> -	if (!is_support_sw_smu(adev) &&
> -	    adev->powerplay.pp_funcs->debugfs_print_current_performance_level) {
> -		mutex_lock(&adev->pm.mutex);
> -		if (adev->powerplay.pp_funcs->debugfs_print_current_performance_level)
> -			adev->powerplay.pp_funcs->debugfs_print_current_performance_level(adev, m);
> -		else
> -			seq_printf(m, "Debugfs support not implemented for this asic\n");
> -		mutex_unlock(&adev->pm.mutex);
> -		r = 0;
> -	} else {
> +	if (amdgpu_dpm_debugfs_print_current_performance_level(adev, m)) {
>  		r = amdgpu_debugfs_pm_info_pp(m, adev);
> +		if (r)
> +			goto out;
>  	}
> -	if (r)
> -		goto out;
> 
>  	amdgpu_device_ip_get_clockgating_state(adev, &flags);
> 
> @@ -3723,21 +3571,18 @@ static ssize_t amdgpu_pm_prv_buffer_read(struct file *f, char __user *buf,
>  					 size_t size, loff_t *pos)
>  {
>  	struct amdgpu_device *adev = file_inode(f)->i_private;
> -	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> -	void *pp_handle = adev->powerplay.pp_handle;
>  	size_t smu_prv_buf_size;
>  	void *smu_prv_buf;
> +	int ret = 0;
> 
>  	if (amdgpu_in_reset(adev))
>  		return -EPERM;
>  	if (adev->in_suspend && !adev->in_runpm)
>  		return -EPERM;
> 
> -	if (pp_funcs && pp_funcs->get_smu_prv_buf_details)
> -		pp_funcs->get_smu_prv_buf_details(pp_handle, &smu_prv_buf,
> -						  &smu_prv_buf_size);
> -	else
> -		return -ENOSYS;
> +	ret = amdgpu_dpm_get_smu_prv_buf_details(adev, &smu_prv_buf, &smu_prv_buf_size);
> +	if (ret)
> +		return ret;
> 
>  	if (!smu_prv_buf || !smu_prv_buf_size)
>  		return -EINVAL;
> @@ -3770,6 +3615,6 @@ void amdgpu_debugfs_pm_init(struct amdgpu_device *adev)
>  					 &amdgpu_debugfs_pm_prv_buffer_fops,
>  					 adev->pm.smu_prv_buffer_size);
> 
> -	amdgpu_smu_stb_debug_fs_init(adev);
> +	amdgpu_dpm_stb_debug_fs_init(adev);
>  #endif
>  }
> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> index 7289d379a9fb..039c40b1d0cb 100644
> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> @@ -262,9 +262,6 @@ enum amdgpu_pcie_gen {
>  #define amdgpu_dpm_pre_set_power_state(adev) \
>  		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> 
> -#define amdgpu_dpm_set_power_state(adev) \
> -		((adev)->powerplay.pp_funcs->set_power_state((adev)->powerplay.pp_handle))
> -
>  #define amdgpu_dpm_post_set_power_state(adev) \
>  		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> 
> @@ -280,100 +277,13 @@ enum amdgpu_pcie_gen {
>  #define amdgpu_dpm_enable_bapm(adev, e) \
>  		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> 
> -#define amdgpu_dpm_set_fan_control_mode(adev, m) \
> -		((adev)->powerplay.pp_funcs->set_fan_control_mode((adev)->powerplay.pp_handle, (m)))
> -
> -#define amdgpu_dpm_get_fan_control_mode(adev) \
> -		((adev)->powerplay.pp_funcs->get_fan_control_mode((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_set_fan_speed_pwm(adev, s) \
> -		((adev)->powerplay.pp_funcs->set_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
> -
> -#define amdgpu_dpm_get_fan_speed_pwm(adev, s) \
> -		((adev)->powerplay.pp_funcs->get_fan_speed_pwm((adev)->powerplay.pp_handle, (s)))
> -
> -#define amdgpu_dpm_get_fan_speed_rpm(adev, s) \
> -		((adev)->powerplay.pp_funcs->get_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
> -
> -#define amdgpu_dpm_set_fan_speed_rpm(adev, s) \
> -		((adev)->powerplay.pp_funcs->set_fan_speed_rpm)((adev)->powerplay.pp_handle, (s))
> -
> -#define amdgpu_dpm_force_performance_level(adev, l) \
> -		((adev)->powerplay.pp_funcs->force_performance_level((adev)->powerplay.pp_handle, (l)))
> -
> -#define amdgpu_dpm_get_current_power_state(adev) \
> -		((adev)->powerplay.pp_funcs->get_current_power_state((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_get_pp_num_states(adev, data) \
> -		((adev)->powerplay.pp_funcs->get_pp_num_states((adev)->powerplay.pp_handle, data))
> -
> -#define amdgpu_dpm_get_pp_table(adev, table) \
> -		((adev)->powerplay.pp_funcs->get_pp_table((adev)->powerplay.pp_handle, table))
> -
> -#define amdgpu_dpm_set_pp_table(adev, buf, size) \
> -		((adev)->powerplay.pp_funcs->set_pp_table((adev)->powerplay.pp_handle, buf, size))
> -
> -#define amdgpu_dpm_print_clock_levels(adev, type, buf) \
> -		((adev)->powerplay.pp_funcs->print_clock_levels((adev)->powerplay.pp_handle, type, buf))
> -
> -#define amdgpu_dpm_force_clock_level(adev, type, level) \
> -		((adev)->powerplay.pp_funcs->force_clock_level((adev)->powerplay.pp_handle, type, level))
> -
> -#define amdgpu_dpm_get_sclk_od(adev) \
> -		((adev)->powerplay.pp_funcs->get_sclk_od((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_set_sclk_od(adev, value) \
> -		((adev)->powerplay.pp_funcs->set_sclk_od((adev)->powerplay.pp_handle, value))
> -
> -#define amdgpu_dpm_get_mclk_od(adev) \
> -		((adev)->powerplay.pp_funcs->get_mclk_od((adev)->powerplay.pp_handle))
> -
> -#define amdgpu_dpm_set_mclk_od(adev, value) \
> -		((adev)->powerplay.pp_funcs->set_mclk_od((adev)->powerplay.pp_handle, value))
> -
> -#define amdgpu_dpm_dispatch_task(adev, task_id, user_state)		\
> -		((adev)->powerplay.pp_funcs->dispatch_tasks)((adev)->powerplay.pp_handle, (task_id), (user_state))
> -
>  #define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
>  		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> 
> -#define amdgpu_dpm_get_vce_clock_state(adev, i)				\
> -		((adev)->powerplay.pp_funcs->get_vce_clock_state((adev)->powerplay.pp_handle, (i)))
> -
> -#define amdgpu_dpm_get_performance_level(adev)				\
> -		((adev)->powerplay.pp_funcs->get_performance_level((adev)->powerplay.pp_handle))
> -
>  #define amdgpu_dpm_reset_power_profile_state(adev, request) \
>  		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
>  			(adev)->powerplay.pp_handle, request))
> 
> -#define amdgpu_dpm_get_power_profile_mode(adev, buf) \
> -		((adev)->powerplay.pp_funcs->get_power_profile_mode(\
> -			(adev)->powerplay.pp_handle, buf))
> -
> -#define amdgpu_dpm_set_power_profile_mode(adev, parameter, size) \
> -		((adev)->powerplay.pp_funcs->set_power_profile_mode(\
> -			(adev)->powerplay.pp_handle, parameter, size))
> -
> -#define amdgpu_dpm_set_fine_grain_clk_vol(adev, type, parameter, size) \
> -		((adev)->powerplay.pp_funcs->set_fine_grain_clk_vol(\
> -			(adev)->powerplay.pp_handle, type, parameter, size))
> -
> -#define amdgpu_dpm_odn_edit_dpm_table(adev, type, parameter, size) \
> -		((adev)->powerplay.pp_funcs->odn_edit_dpm_table(\
> -			(adev)->powerplay.pp_handle, type, parameter, size))
> -
> -#define amdgpu_dpm_get_ppfeature_status(adev, buf) \
> -		((adev)->powerplay.pp_funcs->get_ppfeature_status(\
> -			(adev)->powerplay.pp_handle, (buf)))
> -
> -#define amdgpu_dpm_set_ppfeature_status(adev, ppfeatures) \
> -		((adev)->powerplay.pp_funcs->set_ppfeature_status(\
> -			(adev)->powerplay.pp_handle, (ppfeatures)))
> -
> -#define amdgpu_dpm_get_gpu_metrics(adev, table) \
> -		((adev)->powerplay.pp_funcs->get_gpu_metrics((adev)->powerplay.pp_handle, table))
> -
>  struct amdgpu_dpm {
>  	struct amdgpu_ps        *ps;
>  	/* number of valid power states */
> @@ -598,4 +508,74 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>  				 enum gfx_change_state state);
>  int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>  			    void *umc_ecc);
> +struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
> +						     uint32_t idx);
> +void amdgpu_dpm_get_current_power_state(struct amdgpu_device *adev, enum amd_pm_state_type *state);
> +void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
> +				enum amd_pm_state_type state);
> +enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev);
> +int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
> +				       enum amd_dpm_forced_level level);
> +int amdgpu_dpm_get_pp_num_states(struct amdgpu_device *adev,
> +				 struct pp_states_info *states);
> +int amdgpu_dpm_dispatch_task(struct amdgpu_device *adev,
> +			      enum amd_pp_task task_id,
> +			      enum amd_pm_state_type *user_state);
> +int amdgpu_dpm_get_pp_table(struct amdgpu_device *adev, char **table);
> +int amdgpu_dpm_set_fine_grain_clk_vol(struct amdgpu_device *adev,
> +				      uint32_t type,
> +				      long *input,
> +				      uint32_t size);
> +int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
> +				  uint32_t type,
> +				  long *input,
> +				  uint32_t size);
> +int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
> +				  enum pp_clock_type type,
> +				  char *buf);
> +int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
> +				    uint64_t ppfeature_masks);
> +int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf);
> +int amdgpu_dpm_force_clock_level(struct amdgpu_device *adev,
> +				 enum pp_clock_type type,
> +				 uint32_t mask);
> +int amdgpu_dpm_get_sclk_od(struct amdgpu_device *adev);
> +int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value);
> +int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev);
> +int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value);
> +int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
> +				      char *buf);
> +int amdgpu_dpm_set_power_profile_mode(struct amdgpu_device *adev,
> +				      long *input, uint32_t size);
> +int amdgpu_dpm_get_gpu_metrics(struct amdgpu_device *adev, void
> **table);
> +int amdgpu_dpm_get_fan_control_mode(struct amdgpu_device *adev,
> +				    uint32_t *fan_mode);
> +int amdgpu_dpm_set_fan_speed_pwm(struct amdgpu_device *adev,
> +				 uint32_t speed);
> +int amdgpu_dpm_get_fan_speed_pwm(struct amdgpu_device *adev,
> +				 uint32_t *speed);
> +int amdgpu_dpm_get_fan_speed_rpm(struct amdgpu_device *adev,
> +				 uint32_t *speed);
> +int amdgpu_dpm_set_fan_speed_rpm(struct amdgpu_device *adev,
> +				 uint32_t speed);
> +int amdgpu_dpm_set_fan_control_mode(struct amdgpu_device *adev,
> +				    uint32_t mode);
> +int amdgpu_dpm_get_power_limit(struct amdgpu_device *adev,
> +			       uint32_t *limit,
> +			       enum pp_power_limit_level pp_limit_level,
> +			       enum pp_power_type power_type);
> +int amdgpu_dpm_set_power_limit(struct amdgpu_device *adev,
> +			       uint32_t limit);
> +int amdgpu_dpm_is_cclk_dpm_supported(struct amdgpu_device *adev);
> +int amdgpu_dpm_debugfs_print_current_performance_level(struct
> amdgpu_device *adev,
> +						       struct seq_file *m);
> +int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
> +				       void **addr,
> +				       size_t *size);
> +int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev);
> +int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
> +			    const char *buf,
> +			    size_t size);
> +int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev);
> +void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev);
>  #endif
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> index ef7d0e377965..eaed5aba7547 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> @@ -470,9 +470,6 @@ bool is_support_cclk_dpm(struct amdgpu_device *adev)
>  {
>  	struct smu_context *smu = &adev->smu;
> 
> -	if (!is_support_sw_smu(adev))
> -		return false;
> -
>  	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
>  		return false;
> 
> --
> 2.29.0

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c
  2021-11-30 12:22   ` Lazar, Lijo
@ 2021-12-01  2:07     ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  2:07 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Tuesday, November 30, 2021 8:22 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used
> internally only in si_dpm.c
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > Move them to si_dpm.c instead.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: I288205cfd7c6ba09cfb22626ff70360d61ff0c67
> > --
> > v1->v2:
> >    - rename the API with "si_" prefix(Alex)
> > ---
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 25 -----------
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   | 25 -----------
> >   drivers/gpu/drm/amd/pm/powerplay/si_dpm.c | 54 +++++++++++++++++++----
> >   drivers/gpu/drm/amd/pm/powerplay/si_dpm.h |  7 +++
> >   4 files changed, 53 insertions(+), 58 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index 52ac3c883a6e..fbfc07a83122 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -894,31 +894,6 @@ void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> >   	}
> >   }
> >
> > -enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
> > -						 u32 sys_mask,
> > -						 enum amdgpu_pcie_gen asic_gen,
> > -						 enum amdgpu_pcie_gen default_gen)
> > -{
> > -	switch (asic_gen) {
> > -	case AMDGPU_PCIE_GEN1:
> > -		return AMDGPU_PCIE_GEN1;
> > -	case AMDGPU_PCIE_GEN2:
> > -		return AMDGPU_PCIE_GEN2;
> > -	case AMDGPU_PCIE_GEN3:
> > -		return AMDGPU_PCIE_GEN3;
> > -	default:
> > -		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
> > -		    (default_gen == AMDGPU_PCIE_GEN3))
> > -			return AMDGPU_PCIE_GEN3;
> > -		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
> > -			 (default_gen == AMDGPU_PCIE_GEN2))
> > -			return AMDGPU_PCIE_GEN2;
> > -		else
> > -			return AMDGPU_PCIE_GEN1;
> > -	}
> > -	return AMDGPU_PCIE_GEN1;
> > -}
> > -
> >   struct amd_vce_state*
> >   amdgpu_get_vce_clock_state(void *handle, u32 idx)
> >   {
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > index 6681b878e75f..f43b96dfe9d8 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > @@ -45,19 +45,6 @@ enum amdgpu_int_thermal_type {
> >   	THERMAL_TYPE_KV,
> >   };
> >
> > -enum amdgpu_dpm_auto_throttle_src {
> > -	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
> > -	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
> > -};
> > -
> > -enum amdgpu_dpm_event_src {
> > -	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
> > -	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
> > -	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
> > -	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
> > -	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
> > -};
> > -
> >   struct amdgpu_ps {
> >   	u32 caps; /* vbios flags */
> >   	u32 class; /* vbios flags */
> > @@ -252,13 +239,6 @@ struct amdgpu_dpm_fan {
> >   	bool ucode_fan_control;
> >   };
> >
> > -enum amdgpu_pcie_gen {
> > -	AMDGPU_PCIE_GEN1 = 0,
> > -	AMDGPU_PCIE_GEN2 = 1,
> > -	AMDGPU_PCIE_GEN3 = 2,
> > -	AMDGPU_PCIE_GEN_INVALID = 0xffff
> > -};
> > -
> >   #define amdgpu_dpm_reset_power_profile_state(adev, request) \
> >   		((adev)->powerplay.pp_funcs->reset_power_profile_state(\
> >   			(adev)->powerplay.pp_handle, request))
> > @@ -403,11 +383,6 @@ void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> >
> >   void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> >
> > -enum amdgpu_pcie_gen amdgpu_get_pcie_gen_support(struct amdgpu_device *adev,
> > -						 u32 sys_mask,
> > -						 enum amdgpu_pcie_gen asic_gen,
> > -						 enum amdgpu_pcie_gen default_gen);
> > -
> >   struct amd_vce_state*
> >   amdgpu_get_vce_clock_state(void *handle, u32 idx);
> >
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > index 81f82aa05ec2..4f84d8b893f1 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > @@ -96,6 +96,19 @@ union pplib_clock_info {
> >   	struct _ATOM_PPLIB_SI_CLOCK_INFO si;
> >   };
> >
> > +enum amdgpu_dpm_auto_throttle_src {
> > +	AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL,
> > +	AMDGPU_DPM_AUTO_THROTTLE_SRC_EXTERNAL
> > +};
> > +
> > +enum amdgpu_dpm_event_src {
> > +	AMDGPU_DPM_EVENT_SRC_ANALOG = 0,
> > +	AMDGPU_DPM_EVENT_SRC_EXTERNAL = 1,
> > +	AMDGPU_DPM_EVENT_SRC_DIGITAL = 2,
> > +	AMDGPU_DPM_EVENT_SRC_ANALOG_OR_EXTERNAL = 3,
> > +	AMDGPU_DPM_EVENT_SRC_DIGIAL_OR_EXTERNAL = 4
> > +};
> > +
> 
> Better to also rename the enums, including amdgpu_pcie_gen, if they are
> used only within si_dpm.
[Quan, Evan] Sure, will do that.
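
Roughly, for V3 (a sketch only; the si_-prefixed names below are my
placeholders, not necessarily what V3 will use):

/* si_dpm.h-local copy of the enum, renamed since it becomes SI-only */
enum si_pcie_gen {
	SI_PCIE_GEN1 = 0,
	SI_PCIE_GEN2 = 1,
	SI_PCIE_GEN3 = 2,
	SI_PCIE_GEN_INVALID = 0xffff
};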

Thanks
Evan
> 
> Thanks,
> Lijo
> 
> >   static const u32 r600_utc[R600_PM_NUMBER_OF_TC] =
> >   {
> >   	R600_UTC_DFLT_00,
> > @@ -4927,6 +4940,31 @@ static int si_populate_smc_initial_state(struct amdgpu_device *adev,
> >   	return 0;
> >   }
> >
> > +static enum amdgpu_pcie_gen si_gen_pcie_gen_support(struct amdgpu_device *adev,
> > +						    u32 sys_mask,
> > +						    enum amdgpu_pcie_gen asic_gen,
> > +						    enum amdgpu_pcie_gen default_gen)
> > +{
> > +	switch (asic_gen) {
> > +	case AMDGPU_PCIE_GEN1:
> > +		return AMDGPU_PCIE_GEN1;
> > +	case AMDGPU_PCIE_GEN2:
> > +		return AMDGPU_PCIE_GEN2;
> > +	case AMDGPU_PCIE_GEN3:
> > +		return AMDGPU_PCIE_GEN3;
> > +	default:
> > +		if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) &&
> > +		    (default_gen == AMDGPU_PCIE_GEN3))
> > +			return AMDGPU_PCIE_GEN3;
> > +		else if ((sys_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) &&
> > +			 (default_gen == AMDGPU_PCIE_GEN2))
> > +			return AMDGPU_PCIE_GEN2;
> > +		else
> > +			return AMDGPU_PCIE_GEN1;
> > +	}
> > +	return AMDGPU_PCIE_GEN1;
> > +}
> > +
> >   static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
> >   				      SISLANDS_SMC_STATETABLE *table)
> >   {
> > @@ -4989,10 +5027,10 @@ static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
> >   							      &table->ACPIState.level.std_vddc);
> >   		}
> >   		table->ACPIState.level.gen2PCIE =
> > -			(u8)amdgpu_get_pcie_gen_support(adev,
> > -							si_pi->sys_pcie_mask,
> > -							si_pi->boot_pcie_gen,
> > -							AMDGPU_PCIE_GEN1);
> > +			(u8)si_gen_pcie_gen_support(adev,
> > +						    si_pi->sys_pcie_mask,
> > +						    si_pi->boot_pcie_gen,
> > +						    AMDGPU_PCIE_GEN1);
> >
> >   		if (si_pi->vddc_phase_shed_control)
> >   			si_populate_phase_shedding_value(adev,
> > @@ -7148,10 +7186,10 @@ static void si_parse_pplib_clock_info(struct amdgpu_device *adev,
> >   	pl->vddc = le16_to_cpu(clock_info->si.usVDDC);
> >   	pl->vddci = le16_to_cpu(clock_info->si.usVDDCI);
> >   	pl->flags = le32_to_cpu(clock_info->si.ulFlags);
> > -	pl->pcie_gen = amdgpu_get_pcie_gen_support(adev,
> > -						   si_pi->sys_pcie_mask,
> > -						   si_pi->boot_pcie_gen,
> > -						   clock_info->si.ucPCIEGen);
> > +	pl->pcie_gen = si_gen_pcie_gen_support(adev,
> > +					       si_pi->sys_pcie_mask,
> > +					       si_pi->boot_pcie_gen,
> > +					       clock_info->si.ucPCIEGen);
> >
> >   	/* patch up vddc if necessary */
> >   	ret = si_get_leakage_voltage_from_leakage_index(adev, pl->vddc,
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> > index bc0be6818e21..8c267682eeef 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> > @@ -595,6 +595,13 @@ struct rv7xx_power_info {
> >   	RV770_SMC_STATETABLE smc_statetable;
> >   };
> >
> > +enum amdgpu_pcie_gen {
> > +	AMDGPU_PCIE_GEN1 = 0,
> > +	AMDGPU_PCIE_GEN2 = 1,
> > +	AMDGPU_PCIE_GEN3 = 2,
> > +	AMDGPU_PCIE_GEN_INVALID = 0xffff
> > +};
> > +
> >   struct rv7xx_pl {
> >   	u32 sclk;
> >   	u32 mclk;
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c
  2021-11-30 12:27   ` Lazar, Lijo
@ 2021-12-01  2:47     ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  2:47 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of
> Lazar, Lijo
> Sent: Tuesday, November 30, 2021 8:28 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Feng, Kenneth
> <Kenneth.Feng@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>
> Subject: Re: [PATCH V2 06/17] drm/amd/pm: do not expose the API used
> internally only in kv_dpm.c
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > Move it to kv_dpm.c instead.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: I554332b386491a79b7913f72786f1e2cb1f8165b
> > --
> > v1->v2:
> >    - rename the API with "kv_" prefix(Alex)
> > ---
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c       | 23 ---------------------
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h   |  2 --
> >   drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c | 25 ++++++++++++++++++++++-
> >   3 files changed, 24 insertions(+), 26 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index fbfc07a83122..ecaf0081bc31 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -209,29 +209,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
> >   	return vrefresh;
> >   }
> >
> > -bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
> > -{
> > -	switch (sensor) {
> > -	case THERMAL_TYPE_RV6XX:
> > -	case THERMAL_TYPE_RV770:
> > -	case THERMAL_TYPE_EVERGREEN:
> > -	case THERMAL_TYPE_SUMO:
> > -	case THERMAL_TYPE_NI:
> > -	case THERMAL_TYPE_SI:
> > -	case THERMAL_TYPE_CI:
> > -	case THERMAL_TYPE_KV:
> > -		return true;
> > -	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
> > -	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
> > -		return false; /* need special handling */
> > -	case THERMAL_TYPE_NONE:
> > -	case THERMAL_TYPE_EXTERNAL:
> > -	case THERMAL_TYPE_EXTERNAL_GPIO:
> > -	default:
> > -		return false;
> > -	}
> > -}
> > -
> >   union power_info {
> >   	struct _ATOM_POWERPLAY_INFO info;
> >   	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > index f43b96dfe9d8..01120b302590 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > @@ -374,8 +374,6 @@ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
> >   int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
> >   			   void *data, uint32_t *size);
> >
> > -bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor);
> > -
> >   int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> >
> >   int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > index bcae42cef374..380a5336c74f 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > @@ -1256,6 +1256,29 @@ static void kv_dpm_enable_bapm(void *handle, bool enable)
> >   	}
> >   }
> >
> > +static bool kv_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
> > +{
> > +	switch (sensor) {
> > +	case THERMAL_TYPE_RV6XX:
> > +	case THERMAL_TYPE_RV770:
> > +	case THERMAL_TYPE_EVERGREEN:
> > +	case THERMAL_TYPE_SUMO:
> > +	case THERMAL_TYPE_NI:
> > +	case THERMAL_TYPE_SI:
> > +	case THERMAL_TYPE_CI:
> > +	case THERMAL_TYPE_KV:
> > +		return true;
> > +	case THERMAL_TYPE_ADT7473_WITH_INTERNAL:
> > +	case THERMAL_TYPE_EMC2103_WITH_INTERNAL:
> > +		return false; /* need special handling */
> > +	case THERMAL_TYPE_NONE:
> > +	case THERMAL_TYPE_EXTERNAL:
> > +	case THERMAL_TYPE_EXTERNAL_GPIO:
> > +	default:
> > +		return false;
> > +	}
> > +}
> 
> All these names don't look KV specific. Remove the family-specific ones
> like RV, SI, NI, CI etc., and keep KV and the generic ones like
> GPIO/EXTERNAL/NONE. Don't see a chance of external diodes being used for
> KV.
[Quan, Evan] Makes sense. I will create another patch to address this;
let's keep the change here minimal.
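
Roughly what I have in mind for that follow-up (a sketch under the
assumption that only KV plus the generic NONE/EXTERNAL/GPIO cases need
to stay; not the actual follow-up patch):

static bool kv_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor)
{
	switch (sensor) {
	case THERMAL_TYPE_KV:
		return true;
	case THERMAL_TYPE_NONE:
	case THERMAL_TYPE_EXTERNAL:
	case THERMAL_TYPE_EXTERNAL_GPIO:
	default:
		return false;
	}
}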

Thanks,
Evan
> 
> Thanks,
> Lijo
> 
> > +
> >   static int kv_dpm_enable(struct amdgpu_device *adev)
> >   {
> >   	struct kv_power_info *pi = kv_get_pi(adev);
> > @@ -1352,7 +1375,7 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
> >   	}
> >
> >   	if (adev->irq.installed &&
> > -	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
> > +	    kv_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
> >   		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
> >   		if (ret) {
> >   			DRM_ERROR("kv_set_thermal_temperature_range
> failed\n");
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-11-30 13:21   ` Lazar, Lijo
@ 2021-12-01  3:13     ` Quan, Evan
  2021-12-01  4:19       ` Lazar, Lijo
  0 siblings, 1 reply; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  3:13 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Tuesday, November 30, 2021 9:21 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
> APIs used only by legacy ASICs(si/kv)
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > Those APIs are used only by legacy ASICs (si/kv). They cannot be
> > shared by other ASICs. So, we create a new holder for them.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
> > --
> > v1->v2:
> >    - move other APIs used by si/kv in amdgpu_atombios.c to the new
> >      holder also(Alex)
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
> >   .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
> >   drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
> >   drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
> >   drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453 +++++++++++++++++
> >   drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
> >   drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
> >   10 files changed, 1534 insertions(+), 1470 deletions(-)
> >   create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >   create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> > index 12a6b1c99c93..f2e447212e62 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> > @@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >   	return 0;
> >   }
> >
> > -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> > -					    u32 clock,
> > -					    bool strobe_mode,
> > -					    struct atom_mpll_param *mpll_param)
> > -{
> > -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> > -	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
> > -	u8 frev, crev;
> > -
> > -	memset(&args, 0, sizeof(args));
> > -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> > -
> > -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> > -		return -EINVAL;
> > -
> > -	switch (frev) {
> > -	case 2:
> > -		switch (crev) {
> > -		case 1:
> > -			/* SI */
> > -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> > -			args.ucInputFlag = 0;
> > -			if (strobe_mode)
> > -				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
> > -
> > -			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> > -
> > -			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> > -			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
> > -			mpll_param->post_div = args.ucPostDiv;
> > -			mpll_param->dll_speed = args.ucDllSpeed;
> > -			mpll_param->bwcntl = args.ucBWCntl;
> > -			mpll_param->vco_mode =
> > -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
> > -			mpll_param->yclk_sel =
> > -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> > -			mpll_param->qdr =
> > -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> > -			mpll_param->half_rate =
> > -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> > -			break;
> > -		default:
> > -			return -EINVAL;
> > -		}
> > -		break;
> > -	default:
> > -		return -EINVAL;
> > -	}
> > -	return 0;
> > -}
> > -
> > -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> > -					     u32 eng_clock, u32 mem_clock)
> > -{
> > -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> > -	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
> > -	u32 tmp;
> > -
> > -	memset(&args, 0, sizeof(args));
> > -
> > -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> > -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> > -
> > -	args.ulTargetEngineClock = cpu_to_le32(tmp);
> > -	if (mem_clock)
> > -		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
> > -
> > -	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> > -}
> > -
> > -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> > -					  u16 *vddc, u16 *vddci, u16 *mvdd)
> > -{
> > -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> > -	u8 frev, crev;
> > -	u16 data_offset;
> > -	union firmware_info *firmware_info;
> > -
> > -	*vddc = 0;
> > -	*vddci = 0;
> > -	*mvdd = 0;
> > -
> > -	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> > -				   &frev, &crev, &data_offset)) {
> > -		firmware_info =
> > -			(union firmware_info *)(mode_info->atom_context->bios +
> > -						data_offset);
> > -		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
> > -		if ((frev == 2) && (crev >= 2)) {
> > -			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
> > -			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
> > -		}
> > -	}
> > -}
> > -
> > -union set_voltage {
> > -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> > -	struct _SET_VOLTAGE_PARAMETERS v1;
> > -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> > -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> > -};
> > -
> > -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> > -			     u16 voltage_id, u16 *voltage)
> > -{
> > -	union set_voltage args;
> > -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> > -	u8 frev, crev;
> > -
> > -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> > -		return -EINVAL;
> > -
> > -	switch (crev) {
> > -	case 1:
> > -		return -EINVAL;
> > -	case 2:
> > -		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
> > -		args.v2.ucVoltageMode = 0;
> > -		args.v2.usVoltageLevel = 0;
> > -
> > -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> > -
> > -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> > -		break;
> > -	case 3:
> > -		args.v3.ucVoltageType = voltage_type;
> > -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> > -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> > -
> > -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> > -
> > -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> > -		break;
> > -	default:
> > -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> > -		return -EINVAL;
> > -	}
> > -
> > -	return 0;
> > -}
> > -
> > -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> > -						      u16 *voltage,
> > -						      u16 leakage_idx)
> > -{
> > -	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> > -}
> > -
> > -union voltage_object_info {
> > -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> > -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> > -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> > -};
> > -
> > -union voltage_object {
> > -	struct _ATOM_VOLTAGE_OBJECT v1;
> > -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> > -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> > -};
> > -
> > -
> > -static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
> > -									u8 voltage_type, u8 voltage_mode)
> > -{
> > -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> > -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
> > -	u8 *start = (u8 *)v3;
> > -
> > -	while (offset < size) {
> > -		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> > -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
> > -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
> > -			return vo;
> > -		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
> > -	}
> > -	return NULL;
> > -}
> > -
> > -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> > -			      u8 voltage_type,
> > -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> > -{
> > -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > -	u8 frev, crev;
> > -	u16 data_offset, size;
> > -	union voltage_object_info *voltage_info;
> > -	union voltage_object *voltage_object = NULL;
> > -
> > -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> > -				   &frev, &crev, &data_offset)) {
> > -		voltage_info = (union voltage_object_info *)
> > -			(adev->mode_info.atom_context->bios + data_offset);
> > -
> > -		switch (frev) {
> > -		case 3:
> > -			switch (crev) {
> > -			case 1:
> > -				voltage_object = (union voltage_object *)
> > -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > -										 voltage_type,
> > -										 VOLTAGE_OBJ_SVID2);
> > -				if (voltage_object) {
> > -					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
> > -					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
> > -				} else {
> > -					return -EINVAL;
> > -				}
> > -				break;
> > -			default:
> > -				DRM_ERROR("unknown voltage object table\n");
> > -				return -EINVAL;
> > -			}
> > -			break;
> > -		default:
> > -			DRM_ERROR("unknown voltage object table\n");
> > -			return -EINVAL;
> > -		}
> > -
> > -	}
> > -	return 0;
> > -}
> > -
> > -bool
> > -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> > -				u8 voltage_type, u8 voltage_mode)
> > -{
> > -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > -	u8 frev, crev;
> > -	u16 data_offset, size;
> > -	union voltage_object_info *voltage_info;
> > -
> > -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> > -				   &frev, &crev, &data_offset)) {
> > -		voltage_info = (union voltage_object_info *)
> > -			(adev->mode_info.atom_context->bios + data_offset);
> > -
> > -		switch (frev) {
> > -		case 3:
> > -			switch (crev) {
> > -			case 1:
> > -				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > -									     voltage_type, voltage_mode))
> > -					return true;
> > -				break;
> > -			default:
> > -				DRM_ERROR("unknown voltage object table\n");
> > -				return false;
> > -			}
> > -			break;
> > -		default:
> > -			DRM_ERROR("unknown voltage object table\n");
> > -			return false;
> > -		}
> > -
> > -	}
> > -	return false;
> > -}
> > -
> > -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> > -				      u8 voltage_type, u8 voltage_mode,
> > -				      struct atom_voltage_table *voltage_table)
> > -{
> > -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > -	u8 frev, crev;
> > -	u16 data_offset, size;
> > -	int i;
> > -	union voltage_object_info *voltage_info;
> > -	union voltage_object *voltage_object = NULL;
> > -
> > -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> > -				   &frev, &crev, &data_offset)) {
> > -		voltage_info = (union voltage_object_info *)
> > -			(adev->mode_info.atom_context->bios + data_offset);
> > -
> > -		switch (frev) {
> > -		case 3:
> > -			switch (crev) {
> > -			case 1:
> > -				voltage_object = (union voltage_object *)
> > -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > -										 voltage_type, voltage_mode);
> > -				if (voltage_object) {
> > -					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
> > -						&voltage_object->v3.asGpioVoltageObj;
> > -					VOLTAGE_LUT_ENTRY_V2 *lut;
> > -					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
> > -						return -EINVAL;
> > -					lut = &gpio->asVolGpioLut[0];
> > -					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
> > -						voltage_table->entries[i].value =
> > -							le16_to_cpu(lut->usVoltageValue);
> > -						voltage_table->entries[i].smio_low =
> > -							le32_to_cpu(lut->ulVoltageId);
> > -						lut = (VOLTAGE_LUT_ENTRY_V2 *)
> > -							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
> > -					}
> > -					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
> > -					voltage_table->count = gpio->ucGpioEntryNum;
> > -					voltage_table->phase_delay = gpio->ucPhaseDelay;
> > -					return 0;
> > -				}
> > -				break;
> > -			default:
> > -				DRM_ERROR("unknown voltage object table\n");
> > -				return -EINVAL;
> > -			}
> > -			break;
> > -		default:
> > -			DRM_ERROR("unknown voltage object table\n");
> > -			return -EINVAL;
> > -		}
> > -	}
> > -	return -EINVAL;
> > -}
> > -
> > -union vram_info {
> > -	struct _ATOM_VRAM_INFO_V3 v1_3;
> > -	struct _ATOM_VRAM_INFO_V4 v1_4;
> > -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> > -};
> > -
> > -#define MEM_ID_MASK           0xff000000
> > -#define MEM_ID_SHIFT          24
> > -#define CLOCK_RANGE_MASK      0x00ffffff
> > -#define CLOCK_RANGE_SHIFT     0
> > -#define LOW_NIBBLE_MASK       0xf
> > -#define DATA_EQU_PREV         0
> > -#define DATA_FROM_TABLE       4
> > -
> > -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> > -				      u8 module_index,
> > -				      struct atom_mc_reg_table *reg_table)
> > -{
> > -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> > -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> > -	u32 i = 0, j;
> > -	u16 data_offset, size;
> > -	union vram_info *vram_info;
> > -
> > -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> > -
> > -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> > -				   &frev, &crev, &data_offset)) {
> > -		vram_info = (union vram_info *)
> > -			(adev->mode_info.atom_context->bios + data_offset);
> > -		switch (frev) {
> > -		case 1:
> > -			DRM_ERROR("old table version %d, %d\n", frev, crev);
> > -			return -EINVAL;
> > -		case 2:
> > -			switch (crev) {
> > -			case 1:
> > -				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
> > -					ATOM_INIT_REG_BLOCK *reg_block =
> > -						(ATOM_INIT_REG_BLOCK *)
> > -						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> > -					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> > -						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> > -						((u8 *)reg_block + (2 * sizeof(u16)) +
> > -						 le16_to_cpu(reg_block->usRegIndexTblSize));
> > -					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
> > -					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> > -							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> > -					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
> > -						return -EINVAL;
> > -					while (i < num_entries) {
> > -						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
> > -							break;
> > -						reg_table->mc_reg_address[i].s1 =
> > -							(u16)(le16_to_cpu(format->usRegIndex));
> > -						reg_table->mc_reg_address[i].pre_reg_data =
> > -							(u8)(format->ucPreRegDataLength);
> > -						i++;
> > -						format = (ATOM_INIT_REG_INDEX_FORMAT *)
> > -							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> > -					}
> > -					reg_table->last = i;
> > -					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> > -					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
> > -						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> > -								>> MEM_ID_SHIFT);
> > -						if (module_index == t_mem_id) {
> > -							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
> > -								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> > -								      >> CLOCK_RANGE_SHIFT);
> > -							for (i = 0, j = 1; i < reg_table->last; i++) {
> > -								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
> > -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> > -										(u32)le32_to_cpu(*((u32 *)reg_data + j));
> > -									j++;
> > -								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
> > -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> > -										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> > -								}
> > -							}
> > -							num_ranges++;
> > -						}
> > -						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> > -							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
> > -					}
> > -					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
> > -						return -EINVAL;
> > -					reg_table->num_entries = num_ranges;
> > -				} else
> > -					return -EINVAL;
> > -				break;
> > -			default:
> > -				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> > -				return -EINVAL;
> > -			}
> > -			break;
> > -		default:
> > -			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> > -			return -EINVAL;
> > -		}
> > -		return 0;
> > -	}
> > -	return -EINVAL;
> > -}
> > -
> >   bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
> >   {
> >   	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> > index 27e74b1fc260..cb5649298dcb 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> > @@ -160,26 +160,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >   				       bool strobe_mode,
> >   				       struct atom_clock_dividers *dividers);
> >
> > -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> > -					    u32 clock,
> > -					    bool strobe_mode,
> > -					    struct atom_mpll_param *mpll_param);
> > -
> > -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> > -					     u32 eng_clock, u32 mem_clock);
> > -
> > -bool
> > -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> > -				u8 voltage_type, u8 voltage_mode);
> > -
> > -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> > -				      u8 voltage_type, u8 voltage_mode,
> > -				      struct atom_voltage_table *voltage_table);
> > -
> > -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> > -				      u8 module_index,
> > -				      struct atom_mc_reg_table *reg_table);
> > -
> >   bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev);
> >
> >   void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock);
> > @@ -190,21 +170,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
> >   bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
> >
> >   void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
> > -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> > -			     u16 voltage_id, u16 *voltage);
> > -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> > -						      u16 *voltage,
> > -						      u16 leakage_idx);
> > -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> > -					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >   int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >   				       u8 clock_type,
> >   				       u32 clock,
> >   				       bool strobe_mode,
> >   				       struct atom_clock_dividers *dividers);
> > -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> > -			      u8 voltage_type,
> > -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> >
> >   int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
> >   				   uint32_t table,
> 
> 
> Whether used in legacy or new logic, atombios table parsing/execution
> should be kept as separate logic. These shouldn't be moved along with dpm.
[Quan, Evan] Are you suggesting another holder for those atombios APIs, like a legacy_atombios.c?
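
If that is the direction, the split could look roughly like the below
(hypothetical file name and contents -- just a few of the atombios
prototypes above moved verbatim, pending your confirmation):

/* drivers/gpu/drm/amd/pm/powerplay/legacy_atombios.h -- name TBD */
int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
					    u32 clock, bool strobe_mode,
					    struct atom_mpll_param *mpll_param);
void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
					     u32 eng_clock, u32 mem_clock);
int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
				      u8 voltage_type, u8 voltage_mode,
				      struct atom_voltage_table *voltage_table);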
> 
> 
> > diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > index 2e295facd086..cdf724dcf832 100644
> > --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > @@ -404,6 +404,7 @@ struct amd_pm_funcs {
> >   	int (*get_dpm_clock_table)(void *handle,
> >   				   struct dpm_clocks *clock_table);
> >   	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
> > +	int (*change_power_state)(void *handle);
> >   };
> >
> >   struct metrics_table_header {
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index ecaf0081bc31..c6801d10cde6 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -34,113 +34,9 @@
> >
> >   #define WIDTH_4K 3840
> >
> > -#define amdgpu_dpm_pre_set_power_state(adev) \
> > -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> > -
> > -#define amdgpu_dpm_post_set_power_state(adev) \
> > -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> > -
> > -#define amdgpu_dpm_display_configuration_changed(adev) \
> > -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> > -
> > -#define amdgpu_dpm_print_power_state(adev, ps) \
> > -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> > -
> > -#define amdgpu_dpm_vblank_too_short(adev) \
> > -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> > -
> >   #define amdgpu_dpm_enable_bapm(adev, e) \
> >   		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> >
> > -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> > -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> > -
> > -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> > -{
> > -	const char *s;
> > -
> > -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> > -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> > -	default:
> > -		s = "none";
> > -		break;
> > -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> > -		s = "battery";
> > -		break;
> > -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> > -		s = "balanced";
> > -		break;
> > -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> > -		s = "performance";
> > -		break;
> > -	}
> > -	printk("\tui class: %s\n", s);
> > -	printk("\tinternal class:");
> > -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> > -	    (class2 == 0))
> > -		pr_cont(" none");
> > -	else {
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> > -			pr_cont(" boot");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> > -			pr_cont(" thermal");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> > -			pr_cont(" limited_pwr");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> > -			pr_cont(" rest");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> > -			pr_cont(" forced");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> > -			pr_cont(" 3d_perf");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> > -			pr_cont(" ovrdrv");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> > -			pr_cont(" uvd");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> > -			pr_cont(" 3d_low");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> > -			pr_cont(" acpi");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> > -			pr_cont(" uvd_hd2");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> > -			pr_cont(" uvd_hd");
> > -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> > -			pr_cont(" uvd_sd");
> > -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> > -			pr_cont(" limited_pwr2");
> > -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> > -			pr_cont(" ulv");
> > -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> > -			pr_cont(" uvd_mvc");
> > -	}
> > -	pr_cont("\n");
> > -}
> > -
> > -void amdgpu_dpm_print_cap_info(u32 caps)
> > -{
> > -	printk("\tcaps:");
> > -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> > -		pr_cont(" single_disp");
> > -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> > -		pr_cont(" video");
> > -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> > -		pr_cont(" no_dc");
> > -	pr_cont("\n");
> > -}
> > -
> > -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> > -				struct amdgpu_ps *rps)
> > -{
> > -	printk("\tstatus:");
> > -	if (rps == adev->pm.dpm.current_ps)
> > -		pr_cont(" c");
> > -	if (rps == adev->pm.dpm.requested_ps)
> > -		pr_cont(" r");
> > -	if (rps == adev->pm.dpm.boot_ps)
> > -		pr_cont(" b");
> > -	pr_cont("\n");
> > -}
> > -
> >   static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
> >   {
> >   	struct drm_device *ddev = adev_to_drm(adev);
> > @@ -161,7 +57,6 @@ static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
> >   	}
> >   }
> >
> > -
> >   u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
> >   {
> >   	struct drm_device *dev = adev_to_drm(adev);
> > @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
> >   	return vrefresh;
> >   }
> >
> > -union power_info {
> > -	struct _ATOM_POWERPLAY_INFO info;
> > -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> > -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> > -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> > -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> > -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> > -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> > -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> > -};
> > -
> > -union fan_info {
> > -	struct _ATOM_PPLIB_FANTABLE fan;
> > -	struct _ATOM_PPLIB_FANTABLE2 fan2;
> > -	struct _ATOM_PPLIB_FANTABLE3 fan3;
> > -};
> > -
> > -static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> > -					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> > -{
> > -	u32 size = atom_table->ucNumEntries *
> > -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> > -	int i;
> > -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> > -
> > -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> > -	if (!amdgpu_table->entries)
> > -		return -ENOMEM;
> > -
> > -	entry = &atom_table->entries[0];
> > -	for (i = 0; i < atom_table->ucNumEntries; i++) {
> > -		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
> > -			(entry->ucClockHigh << 16);
> > -		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
> > -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
> > -			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> > -	}
> > -	amdgpu_table->count = atom_table->ucNumEntries;
> > -
> > -	return 0;
> > -}
> > -
> > -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> > -{
> > -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > -	union power_info *power_info;
> > -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > -	u16 data_offset;
> > -	u8 frev, crev;
> > -
> > -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> > -				   &frev, &crev, &data_offset))
> > -		return -EINVAL;
> > -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> > -
> > -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
> > -	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
> > -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
> > -
> > -	return 0;
> > -}
> > -
> > -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> > -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> > -
> > -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> > -{
> > -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > -	union power_info *power_info;
> > -	union fan_info *fan_info;
> > -	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> > -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > -	u16 data_offset;
> > -	u8 frev, crev;
> > -	int ret, i;
> > -
> > -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> > -				   &frev, &crev, &data_offset))
> > -		return -EINVAL;
> > -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> > -
> > -	/* fan table */
> > -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> > -		if (power_info->pplib3.usFanTableOffset) {
> > -			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
> > -						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
> > -			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> > -			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
> > -			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
> > -			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
> > -			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
> > -			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
> > -			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
> > -			if (fan_info->fan.ucFanTableFormat >= 2)
> > -				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
> > -			else
> > -				adev->pm.dpm.fan.t_max = 10900;
> > -			adev->pm.dpm.fan.cycle_delay = 100000;
> > -			if (fan_info->fan.ucFanTableFormat >= 3) {
> > -				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
> > -				adev->pm.dpm.fan.default_max_fan_pwm =
> > -					le16_to_cpu(fan_info->fan3.usFanPWMMax);
> > -				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
> > -				adev->pm.dpm.fan.fan_output_sensitivity =
> > -					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
> > -			}
> > -			adev->pm.dpm.fan.ucode_fan_control = true;
> > -		}
> > -	}
> > -
> > -	/* clock dependancy tables, shedding tables */
> > -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> > -		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> > -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
> > -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
> > -								 dep_table);
> > -			if (ret) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return ret;
> > -			}
> > -		}
> > -		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> > -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> > -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> > -								 dep_table);
> > -			if (ret) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return ret;
> > -			}
> > -		}
> > -		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> > -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> > -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> > -								 dep_table);
> > -			if (ret) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return ret;
> > -			}
> > -		}
> > -		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> > -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> > -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> > -								 dep_table);
> > -			if (ret) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return ret;
> > -			}
> > -		}
> > -		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> > -			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> > -				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> > -			if (clk_v->ucNumEntries) {
> > -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> > -					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> > -					(clk_v->entries[0].ucSclkHigh << 16);
> > -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> > -					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> > -					(clk_v->entries[0].ucMclkHigh << 16);
> > -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> > -					le16_to_cpu(clk_v->entries[0].usVddc);
> > -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> > -					le16_to_cpu(clk_v->entries[0].usVddci);
> > -			}
> > -		}
> > -		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> > -			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> > -				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> > -			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> > -
> > -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> > -				kcalloc(psl->ucNumEntries,
> > -					sizeof(struct amdgpu_phase_shedding_limits_entry),
> > -					GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -
> > -			entry = &psl->entries[0];
> > -			for (i = 0; i < psl->ucNumEntries; i++) {
> > -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> > -					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> > -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> > -					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> > -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> > -					le16_to_cpu(entry->usVoltage);
> > -				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> > -					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> > -			}
> > -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> > -				psl->ucNumEntries;
> > -		}
> > -	}
> > -
> > -	/* cac data */
> > -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> > -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> > -		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> > -		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> > -		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> > -		if (adev->pm.dpm.tdp_od_limit)
> > -			adev->pm.dpm.power_control = true;
> > -		else
> > -			adev->pm.dpm.power_control = false;
> > -		adev->pm.dpm.tdp_adjustment = 0;
> > -		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> > -		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> > -		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> > -		if (power_info->pplib5.usCACLeakageTableOffset) {
> > -			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> > -				(ATOM_PPLIB_CAC_Leakage_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> > -			ATOM_PPLIB_CAC_Leakage_Record *entry;
> > -			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> > -			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			entry = &cac_table->entries[0];
> > -			for (i = 0; i < cac_table->ucNumEntries; i++) {
> > -				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> > -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> > -						le16_to_cpu(entry->usVddc1);
> > -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> > -						le16_to_cpu(entry->usVddc2);
> > -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> > -						le16_to_cpu(entry->usVddc3);
> > -				} else {
> > -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> > -						le16_to_cpu(entry->usVddc);
> > -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> > -						le32_to_cpu(entry->ulLeakageValue);
> > -				}
> > -				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> > -					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> > -			}
> > -			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> > -		}
> > -	}
> > -
> > -	/* ext tables */
> > -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> > -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> > -			(mode_info->atom_context->bios + data_offset +
> > -			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> > -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> > -			ext_hdr->usVCETableOffset) {
> > -			VCEClockInfoArray *array = (VCEClockInfoArray *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> > -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> > -				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> > -				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> > -			ATOM_PPLIB_VCE_State_Table *states =
> > -				(ATOM_PPLIB_VCE_State_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> > -				 1 + (array->ucNumEntries * sizeof (VCEClockInfo)) +
> > -				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> > -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> > -			ATOM_PPLIB_VCE_State_Record *state_entry;
> > -			VCEClockInfo *vce_clk;
> > -			u32 size = limits->numEntries *
> > -				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> > -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> > -				kzalloc(size, GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> > -				limits->numEntries;
> > -			entry = &limits->entries[0];
> > -			state_entry = &states->entries[0];
> > -			for (i = 0; i < limits->numEntries; i++) {
> > -				vce_clk = (VCEClockInfo *)
> > -					((u8 *)&array->entries[0] +
> > -					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> > -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> > -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> > -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> > -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> > -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> > -					le16_to_cpu(entry->usVoltage);
> > -				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> > -					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> > -			}
> > -			adev->pm.dpm.num_of_vce_states =
> > -					states->numEntries > AMD_MAX_VCE_LEVELS ?
> > -					AMD_MAX_VCE_LEVELS : states->numEntries;
> > -			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> > -				vce_clk = (VCEClockInfo *)
> > -					((u8 *)&array->entries[0] +
> > -					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> > -				adev->pm.dpm.vce_states[i].evclk =
> > -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> > -				adev->pm.dpm.vce_states[i].ecclk =
> > -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> > -				adev->pm.dpm.vce_states[i].clk_idx =
> > -					state_entry->ucClockInfoIndex & 0x3f;
> > -				adev->pm.dpm.vce_states[i].pstate =
> > -					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> > -				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> > -					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> > -			}
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> > -			ext_hdr->usUVDTableOffset) {
> > -			UVDClockInfoArray *array = (UVDClockInfoArray *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> > -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> > -				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> > -				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
> > -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> > -			u32 size = limits->numEntries *
> > -				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> > -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> > -				kzalloc(size, GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> > -				amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> > -				limits->numEntries;
> > -			entry = &limits->entries[0];
> > -			for (i = 0; i < limits->numEntries; i++) {
> > -				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> > -					((u8 *)&array->entries[0] +
> > -					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> > -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> > -					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> > -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> > -					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> > -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> > -					le16_to_cpu(entry->usVoltage);
> > -				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> > -					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> > -			}
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> > -			ext_hdr->usSAMUTableOffset) {
> > -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> > -				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> > -				(mode_info->atom_context->bios + data_offset +
> > -				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> > -			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> > -			u32 size = limits->numEntries *
> > -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> > -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> > -				kzalloc(size, GFP_KERNEL);
> > -			if (!adev-
> >pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> > -
> 	amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			adev-
> >pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> > -				limits->numEntries;
> > -			entry = &limits->entries[0];
> > -			for (i = 0; i < limits->numEntries; i++) {
> > -				adev-
> >pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> > -					le16_to_cpu(entry->usSAMClockLow)
> | (entry->ucSAMClockHigh << 16);
> > -				adev-
> >pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> > -					le16_to_cpu(entry->usVoltage);
> > -				entry =
> (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> > -					((u8 *)entry +
> sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> > -			}
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >=
> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> > -		    ext_hdr->usPPMTableOffset) {
> > -			ATOM_PPLIB_PPM_Table *ppm =
> (ATOM_PPLIB_PPM_Table *)
> > -				(mode_info->atom_context->bios +
> data_offset +
> > -				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> > -			adev->pm.dpm.dyn_state.ppm_table =
> > -				kzalloc(sizeof(struct amdgpu_ppm_table),
> GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.ppm_table) {
> > -
> 	amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			adev->pm.dpm.dyn_state.ppm_table->ppm_design
> = ppm->ucPpmDesign;
> > -			adev->pm.dpm.dyn_state.ppm_table-
> >cpu_core_number =
> > -				le16_to_cpu(ppm->usCpuCoreNumber);
> > -			adev->pm.dpm.dyn_state.ppm_table-
> >platform_tdp =
> > -				le32_to_cpu(ppm->ulPlatformTDP);
> > -			adev->pm.dpm.dyn_state.ppm_table-
> >small_ac_platform_tdp =
> > -				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> > -			adev->pm.dpm.dyn_state.ppm_table->platform_tdc
> =
> > -				le32_to_cpu(ppm->ulPlatformTDC);
> > -			adev->pm.dpm.dyn_state.ppm_table-
> >small_ac_platform_tdc =
> > -				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> > -			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> > -				le32_to_cpu(ppm->ulApuTDP);
> > -			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> > -				le32_to_cpu(ppm->ulDGpuTDP);
> > -			adev->pm.dpm.dyn_state.ppm_table-
> >dgpu_ulv_power =
> > -				le32_to_cpu(ppm->ulDGpuUlvPower);
> > -			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> > -				le32_to_cpu(ppm->ulTjmax);
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >=
> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> > -			ext_hdr->usACPTableOffset) {
> > -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> > -				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table
> *)
> > -				(mode_info->atom_context->bios +
> data_offset +
> > -				 le16_to_cpu(ext_hdr->usACPTableOffset) +
> 1);
> > -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> > -			u32 size = limits->numEntries *
> > -				sizeof(struct
> amdgpu_clock_voltage_dependency_entry);
> > -			adev-
> >pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> > -				kzalloc(size, GFP_KERNEL);
> > -			if (!adev-
> >pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> > -
> 	amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			adev-
> >pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> > -				limits->numEntries;
> > -			entry = &limits->entries[0];
> > -			for (i = 0; i < limits->numEntries; i++) {
> > -				adev-
> >pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> > -					le16_to_cpu(entry->usACPClockLow)
> | (entry->ucACPClockHigh << 16);
> > -				adev-
> >pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> > -					le16_to_cpu(entry->usVoltage);
> > -				entry =
> (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> > -					((u8 *)entry +
> sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> > -			}
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >=
> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> > -			ext_hdr->usPowerTuneTableOffset) {
> > -			u8 rev = *(u8 *)(mode_info->atom_context->bios +
> data_offset +
> > -					 le16_to_cpu(ext_hdr-
> >usPowerTuneTableOffset));
> > -			ATOM_PowerTune_Table *pt;
> > -			adev->pm.dpm.dyn_state.cac_tdp_table =
> > -				kzalloc(sizeof(struct amdgpu_cac_tdp_table),
> GFP_KERNEL);
> > -			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> > -
> 	amdgpu_free_extended_power_table(adev);
> > -				return -ENOMEM;
> > -			}
> > -			if (rev > 0) {
> > -				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt =
> (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> > -					(mode_info->atom_context->bios +
> data_offset +
> > -					 le16_to_cpu(ext_hdr-
> >usPowerTuneTableOffset));
> > -				adev->pm.dpm.dyn_state.cac_tdp_table-
> >maximum_power_delivery_limit =
> > -					ppt->usMaximumPowerDeliveryLimit;
> > -				pt = &ppt->power_tune_table;
> > -			} else {
> > -				ATOM_PPLIB_POWERTUNE_Table *ppt =
> (ATOM_PPLIB_POWERTUNE_Table *)
> > -					(mode_info->atom_context->bios +
> data_offset +
> > -					 le16_to_cpu(ext_hdr-
> >usPowerTuneTableOffset));
> > -				adev->pm.dpm.dyn_state.cac_tdp_table-
> >maximum_power_delivery_limit = 255;
> > -				pt = &ppt->power_tune_table;
> > -			}
> > -			adev->pm.dpm.dyn_state.cac_tdp_table->tdp =
> le16_to_cpu(pt->usTDP);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >configurable_tdp =
> > -				le16_to_cpu(pt->usConfigurableTDP);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table->tdc =
> le16_to_cpu(pt->usTDC);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >battery_power_limit =
> > -				le16_to_cpu(pt->usBatteryPowerLimit);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >small_power_limit =
> > -				le16_to_cpu(pt->usSmallPowerLimit);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >low_cac_leakage =
> > -				le16_to_cpu(pt->usLowCACLeakage);
> > -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >high_cac_leakage =
> > -				le16_to_cpu(pt->usHighCACLeakage);
> > -		}
> > -		if ((le16_to_cpu(ext_hdr->usSize) >=
> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> > -				ext_hdr->usSclkVddgfxTableOffset) {
> > -			dep_table =
> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > -				(mode_info->atom_context->bios +
> data_offset +
> > -				 le16_to_cpu(ext_hdr-
> >usSclkVddgfxTableOffset));
> > -			ret = amdgpu_parse_clk_voltage_dep_table(
> > -					&adev-
> >pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> > -					dep_table);
> > -			if (ret) {
> > -				kfree(adev-
> >pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> > -				return ret;
> > -			}
> > -		}
> > -	}
> > -
> > -	return 0;
> > -}
> > -
> > -void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> > -{
> > -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev-
> >pm.dpm.dyn_state;
> > -
> > -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> > -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> > -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> > -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> > -	kfree(dyn_state->cac_leakage_table.entries);
> > -	kfree(dyn_state->phase_shedding_limits_table.entries);
> > -	kfree(dyn_state->ppm_table);
> > -	kfree(dyn_state->cac_tdp_table);
> > -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> > -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> > -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> > -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> > -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> > -}
> > -
> > -static const char *pp_lib_thermal_controller_names[] = {
> > -	"NONE",
> > -	"lm63",
> > -	"adm1032",
> > -	"adm1030",
> > -	"max6649",
> > -	"lm64",
> > -	"f75375",
> > -	"RV6xx",
> > -	"RV770",
> > -	"adt7473",
> > -	"NONE",
> > -	"External GPIO",
> > -	"Evergreen",
> > -	"emc2103",
> > -	"Sumo",
> > -	"Northern Islands",
> > -	"Southern Islands",
> > -	"lm96163",
> > -	"Sea Islands",
> > -	"Kaveri/Kabini",
> > -};
> > -
> > -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> > -{
> > -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> > -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > -	ATOM_PPLIB_THERMALCONTROLLER *controller;
> > -	struct amdgpu_i2c_bus_rec i2c_bus;
> > -	u16 data_offset;
> > -	u8 frev, crev;
> > -
> > -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> index, NULL,
> > -				   &frev, &crev, &data_offset))
> > -		return;
> > -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> > -		(mode_info->atom_context->bios + data_offset);
> > -	controller = &power_table->sThermalController;
> > -
> > -	/* add the i2c bus for thermal/fan chip */
> > -	if (controller->ucType > 0) {
> > -		if (controller->ucFanParameters &
> ATOM_PP_FANPARAMETERS_NOFAN)
> > -			adev->pm.no_fan = true;
> > -		adev->pm.fan_pulses_per_revolution =
> > -			controller->ucFanParameters &
> ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_M
> ASK;
> > -		if (adev->pm.fan_pulses_per_revolution) {
> > -			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> > -			adev->pm.fan_max_rpm = controller-
> >ucFanMaxRPM;
> > -		}
> > -		if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_RV6xx) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_RV6XX;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_RV770) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_RV770;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_EVERGREEN;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_SUMO) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_SUMO;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_KAVERI) {
> > -			DRM_INFO("Internal thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> > -		} else if (controller->ucType ==
> ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> > -			DRM_INFO("External GPIO thermal controller %s fan
> control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_EXTERNAL_GPIO;
> > -		} else if (controller->ucType ==
> > -
> ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> > -			DRM_INFO("ADT7473 with internal thermal
> controller %s fan control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> > -		} else if (controller->ucType ==
> > -
> ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> > -			DRM_INFO("EMC2103 with internal thermal
> controller %s fan control\n",
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> > -		} else if (controller->ucType <
> ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> > -			DRM_INFO("Possible %s thermal controller at
> 0x%02x %s fan control\n",
> > -
> pp_lib_thermal_controller_names[controller->ucType],
> > -				 controller->ucI2cAddress >> 1,
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -			adev->pm.int_thermal_type =
> THERMAL_TYPE_EXTERNAL;
> > -			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev,
> controller->ucI2cLine);
> > -			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev,
> &i2c_bus);
> > -			if (adev->pm.i2c_bus) {
> > -				struct i2c_board_info info = { };
> > -				const char *name =
> pp_lib_thermal_controller_names[controller->ucType];
> > -				info.addr = controller->ucI2cAddress >> 1;
> > -				strlcpy(info.type, name, sizeof(info.type));
> > -				i2c_new_client_device(&adev->pm.i2c_bus-
> >adapter, &info);
> > -			}
> > -		} else {
> > -			DRM_INFO("Unknown thermal controller type %d at
> 0x%02x %s fan control\n",
> > -				 controller->ucType,
> > -				 controller->ucI2cAddress >> 1,
> > -				 (controller->ucFanParameters &
> > -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> "without" : "with");
> > -		}
> > -	}
> > -}
> > -
> > -struct amd_vce_state*
> > -amdgpu_get_vce_clock_state(void *handle, u32 idx)
> > -{
> > -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -
> > -	if (idx < adev->pm.dpm.num_of_vce_states)
> > -		return &adev->pm.dpm.vce_states[idx];
> > -
> > -	return NULL;
> > -}
> > -
> >   int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
> >   {
> >   	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> > @@ -1243,211 +465,6 @@ void
> amdgpu_dpm_thermal_work_handler(struct work_struct *work)
> >   	amdgpu_pm_compute_clocks(adev);
> >   }
> >
> > -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct
> amdgpu_device *adev,
> > -						     enum
> amd_pm_state_type dpm_state)
> > -{
> > -	int i;
> > -	struct amdgpu_ps *ps;
> > -	u32 ui_class;
> > -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> > -		true : false;
> > -
> > -	/* check if the vblank period is too short to adjust the mclk */
> > -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short)
> {
> > -		if (amdgpu_dpm_vblank_too_short(adev))
> > -			single_display = false;
> > -	}
> > -
> > -	/* certain older asics have a separare 3D performance state,
> > -	 * so try that first if the user selected performance
> > -	 */
> > -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> > -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> > -	/* balanced states don't exist at the moment */
> > -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> > -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > -
> > -restart_search:
> > -	/* Pick the best power state based on current conditions */
> > -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> > -		ps = &adev->pm.dpm.ps[i];
> > -		ui_class = ps->class &
> ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> > -		switch (dpm_state) {
> > -		/* user states */
> > -		case POWER_STATE_TYPE_BATTERY:
> > -			if (ui_class ==
> ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> > -				if (ps->caps &
> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > -					if (single_display)
> > -						return ps;
> > -				} else
> > -					return ps;
> > -			}
> > -			break;
> > -		case POWER_STATE_TYPE_BALANCED:
> > -			if (ui_class ==
> ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> > -				if (ps->caps &
> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > -					if (single_display)
> > -						return ps;
> > -				} else
> > -					return ps;
> > -			}
> > -			break;
> > -		case POWER_STATE_TYPE_PERFORMANCE:
> > -			if (ui_class ==
> ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> > -				if (ps->caps &
> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > -					if (single_display)
> > -						return ps;
> > -				} else
> > -					return ps;
> > -			}
> > -			break;
> > -		/* internal states */
> > -		case POWER_STATE_TYPE_INTERNAL_UVD:
> > -			if (adev->pm.dpm.uvd_ps)
> > -				return adev->pm.dpm.uvd_ps;
> > -			else
> > -				break;
> > -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> > -			if (ps->class &
> ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> > -			if (ps->class &
> ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> > -			if (ps->class &
> ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> > -			if (ps->class2 &
> ATOM_PPLIB_CLASSIFICATION2_MVC)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_BOOT:
> > -			return adev->pm.dpm.boot_ps;
> > -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> > -			if (ps->class &
> ATOM_PPLIB_CLASSIFICATION_THERMAL)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_ACPI:
> > -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_ULV:
> > -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> > -				return ps;
> > -			break;
> > -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> > -			if (ps->class &
> ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> > -				return ps;
> > -			break;
> > -		default:
> > -			break;
> > -		}
> > -	}
> > -	/* use a fallback state if we didn't match */
> > -	switch (dpm_state) {
> > -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> > -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> > -		goto restart_search;
> > -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> > -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> > -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> > -		if (adev->pm.dpm.uvd_ps) {
> > -			return adev->pm.dpm.uvd_ps;
> > -		} else {
> > -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > -			goto restart_search;
> > -		}
> > -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> > -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> > -		goto restart_search;
> > -	case POWER_STATE_TYPE_INTERNAL_ACPI:
> > -		dpm_state = POWER_STATE_TYPE_BATTERY;
> > -		goto restart_search;
> > -	case POWER_STATE_TYPE_BATTERY:
> > -	case POWER_STATE_TYPE_BALANCED:
> > -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> > -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > -		goto restart_search;
> > -	default:
> > -		break;
> > -	}
> > -
> > -	return NULL;
> > -}
> > -
> > -static void amdgpu_dpm_change_power_state_locked(struct
> amdgpu_device *adev)
> > -{
> > -	struct amdgpu_ps *ps;
> > -	enum amd_pm_state_type dpm_state;
> > -	int ret;
> > -	bool equal = false;
> > -
> > -	/* if dpm init failed */
> > -	if (!adev->pm.dpm_enabled)
> > -		return;
> > -
> > -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> > -		/* add other state override checks here */
> > -		if ((!adev->pm.dpm.thermal_active) &&
> > -		    (!adev->pm.dpm.uvd_active))
> > -			adev->pm.dpm.state = adev->pm.dpm.user_state;
> > -	}
> > -	dpm_state = adev->pm.dpm.state;
> > -
> > -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> > -	if (ps)
> > -		adev->pm.dpm.requested_ps = ps;
> > -	else
> > -		return;
> > -
> > -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs-
> >print_power_state) {
> > -		printk("switching from power state:\n");
> > -		amdgpu_dpm_print_power_state(adev, adev-
> >pm.dpm.current_ps);
> > -		printk("switching to power state:\n");
> > -		amdgpu_dpm_print_power_state(adev, adev-
> >pm.dpm.requested_ps);
> > -	}
> > -
> > -	/* update whether vce is active */
> > -	ps->vce_active = adev->pm.dpm.vce_active;
> > -	if (adev->powerplay.pp_funcs->display_configuration_changed)
> > -		amdgpu_dpm_display_configuration_changed(adev);
> > -
> > -	ret = amdgpu_dpm_pre_set_power_state(adev);
> > -	if (ret)
> > -		return;
> > -
> > -	if (adev->powerplay.pp_funcs->check_state_equal) {
> > -		if (0 != amdgpu_dpm_check_state_equal(adev, adev-
> >pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> > -			equal = false;
> > -	}
> > -
> > -	if (equal)
> > -		return;
> > -
> > -	if (adev->powerplay.pp_funcs->set_power_state)
> > -		adev->powerplay.pp_funcs->set_power_state(adev-
> >powerplay.pp_handle);
> > -
> > -	amdgpu_dpm_post_set_power_state(adev);
> > -
> > -	adev->pm.dpm.current_active_crtcs = adev-
> >pm.dpm.new_active_crtcs;
> > -	adev->pm.dpm.current_active_crtc_count = adev-
> >pm.dpm.new_active_crtc_count;
> > -
> > -	if (adev->powerplay.pp_funcs->force_performance_level) {
> > -		if (adev->pm.dpm.thermal_active) {
> > -			enum amd_dpm_forced_level level = adev-
> >pm.dpm.forced_level;
> > -			/* force low perf level for thermal */
> > -			amdgpu_dpm_force_performance_level(adev,
> AMD_DPM_FORCED_LEVEL_LOW);
> > -			/* save the user's level */
> > -			adev->pm.dpm.forced_level = level;
> > -		} else {
> > -			/* otherwise, user selected level */
> > -			amdgpu_dpm_force_performance_level(adev,
> adev->pm.dpm.forced_level);
> > -		}
> > -	}
> > -}
> > -
> >   void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >   {
> 
> Rename to amdgpu_dpm_compute_clocks?
[Quan, Evan] Sure, I can do that.
> 
> >   	int i = 0;
> > @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >   			amdgpu_fence_wait_empty(ring);
> >   	}
> >
> > -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
> > +	if ((adev->family == AMDGPU_FAMILY_SI) ||
> > +	     (adev->family == AMDGPU_FAMILY_KV)) {
> > +		amdgpu_dpm_get_active_displays(adev);
> > +		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
> 
> It would be clearer if the newly added logic in this function were in
> a separate patch. This does more than what the patch subject says.
[Quan, Evan] Actually there is no new logic added. This branch covers the "!adev->powerplay.pp_funcs->dispatch_tasks" case.
Since SI and KV are actually the only ASICs that do not have ->dispatch_tasks() implemented,
I used "((adev->family == AMDGPU_FAMILY_SI) || (adev->family == AMDGPU_FAMILY_KV))" here.
Maybe I should stick with "!adev->powerplay.pp_funcs->dispatch_tasks"?
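To make the alternative concrete, a minimal sketch of what sticking with the capability check would look like (sketch only, not part of this patch; it assumes ->change_power_state stays the sole path for ASICs without ->dispatch_tasks()):

	/* Hypothetical alternative: key off the missing callback rather than
	 * hardcoding the ASIC families. Today only SI and KV lack
	 * ->dispatch_tasks(), so both checks select the same devices, but this
	 * form needs no update if that list ever changes.
	 */
	if (!adev->powerplay.pp_funcs->dispatch_tasks) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	} else {
		/* everything else keeps going through ->dispatch_tasks() */
	}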
> 
> > +	} else {
> >   		if (!amdgpu_device_has_dc_support(adev)) {
> > -			mutex_lock(&adev->pm.mutex);
> >   			amdgpu_dpm_get_active_displays(adev);
> >   			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
> >   			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
> > @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >   				adev->powerplay.pp_funcs->display_configuration_change(
> >   							adev->powerplay.pp_handle,
> >   							&adev->pm.pm_display_cfg);
> > -			mutex_unlock(&adev->pm.mutex);
> >   		}
> >   		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
> > -	} else {
> > -		mutex_lock(&adev->pm.mutex);
> > -		amdgpu_dpm_get_active_displays(adev);
> > -		amdgpu_dpm_change_power_state_locked(adev);
> > -		mutex_unlock(&adev->pm.mutex);
> >   	}
> >   }
> >
> > @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
> >   	}
> >   }
> >
> > -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> > -{
> > -	int i;
> > -
> > -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> > -		return;
> > -
> > -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> > -		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
> > -
> > -}
> > -
> >   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
> >   {
> >   	int ret = 0;
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > index 01120b302590..295d2902aef7 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> > @@ -366,24 +366,10 @@ enum amdgpu_display_gap
> >       AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
> >   };
> >
> > -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> > -void amdgpu_dpm_print_cap_info(u32 caps);
> > -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> > -				struct amdgpu_ps *rps);
> >   u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
> >   int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
> >   			   void *data, uint32_t *size);
> >
> > -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> > -
> > -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> > -void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> > -
> > -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> > -
> > -struct amd_vce_state*
> > -amdgpu_get_vce_clock_state(void *handle, u32 idx);
> > -
> >   int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
> >   				      uint32_t block_type, bool gate);
> >
> > @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
> >   void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
> >   void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
> >   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
> > -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >   int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
> >   int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
> >   int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > index 0fb114adc79f..614d8b6a58ad 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
> >
> >   include $(AMD_POWERPLAY)
> >
> > -POWER_MGR-y = amd_powerplay.o
> > +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
> >
> >   POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> >
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > index 380a5336c74f..90f4c65659e2 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > @@ -36,6 +36,7 @@
> >
> >   #include "gca/gfx_7_2_d.h"
> >   #include "gca/gfx_7_2_sh_mask.h"
> > +#include "legacy_dpm.h"
> >
> >   #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
> >   #define KV_MINIMUM_ENGINE_CLOCK         800
> > @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
> >   	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >   	.check_state_equal = kv_check_state_equal,
> >   	.read_sensor = &kv_dpm_read_sensor,
> > +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >   };
> >
> >   static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> 
> This could get confused with all the APIs that support legacy dpm. This
> file holds only a subset of the APIs supporting legacy dpm. It needs a
> better name - powerplay_ctrl/powerplay_util?
[Quan, Evan] The "legacy_dpm" name refers to the logic used only by si/kv (si_dpm.c, kv_dpm.c).
Since that logic is not used by default (the radeon driver, rather than amdgpu, supports those legacy ASICs by default),
we might drop support for them from our amdgpu driver. So I gathered all those APIs and put them in a new holder.
Maybe you took it as a new holder for the powerplay APIs (used by VI/AI)?
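To make the split concrete, here is a minimal sketch of how a legacy ASIC hooks into the relocated code (condensed from the kv_dpm.c hunk above; every callback other than the two shown is elided):

	/* kv_dpm.c (si_dpm.c is analogous): the only consumers of legacy_dpm.c.
	 * The shared state-switch logic is reached via the new
	 * ->change_power_state callback, so amdgpu_dpm.c no longer has to call
	 * the legacy helpers directly.
	 */
	#include "legacy_dpm.h"

	static const struct amd_pm_funcs kv_dpm_funcs = {
		/* ... other callbacks elided ... */
		.get_vce_clock_state = amdgpu_get_vce_clock_state,
		.change_power_state = amdgpu_dpm_change_power_state_locked,
	};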

BR
Evan
> 
> Thanks,
> Lijo
> 
> > new file mode 100644
> > index 000000000000..9427c1026e1d
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> > @@ -0,0 +1,1453 @@
> > +/*
> > + * Copyright 2021 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the
> "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included
> in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
> KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
> NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
> DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
> THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#include "amdgpu.h"
> > +#include "amdgpu_atombios.h"
> > +#include "amdgpu_i2c.h"
> > +#include "atom.h"
> > +#include "amd_pcie.h"
> > +#include "legacy_dpm.h"
> > +
> > +#define amdgpu_dpm_pre_set_power_state(adev) \
> > +		((adev)->powerplay.pp_funcs-
> >pre_set_power_state((adev)->powerplay.pp_handle))
> > +
> > +#define amdgpu_dpm_post_set_power_state(adev) \
> > +		((adev)->powerplay.pp_funcs-
> >post_set_power_state((adev)->powerplay.pp_handle))
> > +
> > +#define amdgpu_dpm_display_configuration_changed(adev) \
> > +		((adev)->powerplay.pp_funcs-
> >display_configuration_changed((adev)->powerplay.pp_handle))
> > +
> > +#define amdgpu_dpm_print_power_state(adev, ps) \
> > +		((adev)->powerplay.pp_funcs->print_power_state((adev)-
> >powerplay.pp_handle, (ps)))
> > +
> > +#define amdgpu_dpm_vblank_too_short(adev) \
> > +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)-
> >powerplay.pp_handle))
> > +
> > +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> > +		((adev)->powerplay.pp_funcs->check_state_equal((adev)-
> >powerplay.pp_handle, (cps), (rps), (equal)))
> > +
> > +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device
> *adev,
> > +					    u32 clock,
> > +					    bool strobe_mode,
> > +					    struct atom_mpll_param
> *mpll_param)
> > +{
> > +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> > +	int index = GetIndexIntoMasterTable(COMMAND,
> ComputeMemoryClockParam);
> > +	u8 frev, crev;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> > +
> > +	if (!amdgpu_atom_parse_cmd_header(adev-
> >mode_info.atom_context, index, &frev, &crev))
> > +		return -EINVAL;
> > +
> > +	switch (frev) {
> > +	case 2:
> > +		switch (crev) {
> > +		case 1:
> > +			/* SI */
> > +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> > +			args.ucInputFlag = 0;
> > +			if (strobe_mode)
> > +				args.ucInputFlag |=
> MPLL_INPUT_FLAG_STROBE_MODE_EN;
> > +
> > +			amdgpu_atom_execute_table(adev-
> >mode_info.atom_context, index, (uint32_t *)&args);
> > +
> > +			mpll_param->clkfrac =
> le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> > +			mpll_param->clkf =
> le16_to_cpu(args.ulFbDiv.usFbDiv);
> > +			mpll_param->post_div = args.ucPostDiv;
> > +			mpll_param->dll_speed = args.ucDllSpeed;
> > +			mpll_param->bwcntl = args.ucBWCntl;
> > +			mpll_param->vco_mode =
> > +				(args.ucPllCntlFlag &
> MPLL_CNTL_FLAG_VCO_MODE_MASK);
> > +			mpll_param->yclk_sel =
> > +				(args.ucPllCntlFlag &
> MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> > +			mpll_param->qdr =
> > +				(args.ucPllCntlFlag &
> MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> > +			mpll_param->half_rate =
> > +				(args.ucPllCntlFlag &
> MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> > +			break;
> > +		default:
> > +			return -EINVAL;
> > +		}
> > +		break;
> > +	default:
> > +		return -EINVAL;
> > +	}
> > +	return 0;
> > +}
> > +
> > +void amdgpu_atombios_set_engine_dram_timings(struct
> amdgpu_device *adev,
> > +					     u32 eng_clock, u32 mem_clock)
> > +{
> > +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> > +	int index = GetIndexIntoMasterTable(COMMAND,
> DynamicMemorySettings);
> > +	u32 tmp;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +
> > +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> > +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> > +
> > +	args.ulTargetEngineClock = cpu_to_le32(tmp);
> > +	if (mem_clock)
> > +		args.sReserved.ulClock = cpu_to_le32(mem_clock &
> SET_CLOCK_FREQ_MASK);
> > +
> > +	amdgpu_atom_execute_table(adev->mode_info.atom_context,
> index, (uint32_t *)&args);
> > +}
> > +
> > +union firmware_info {
> > +	ATOM_FIRMWARE_INFO info;
> > +	ATOM_FIRMWARE_INFO_V1_2 info_12;
> > +	ATOM_FIRMWARE_INFO_V1_3 info_13;
> > +	ATOM_FIRMWARE_INFO_V1_4 info_14;
> > +	ATOM_FIRMWARE_INFO_V2_1 info_21;
> > +	ATOM_FIRMWARE_INFO_V2_2 info_22;
> > +};
> > +
> > +void amdgpu_atombios_get_default_voltages(struct amdgpu_device
> *adev,
> > +					  u16 *vddc, u16 *vddci, u16 *mvdd)
> > +{
> > +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> > +	u8 frev, crev;
> > +	u16 data_offset;
> > +	union firmware_info *firmware_info;
> > +
> > +	*vddc = 0;
> > +	*vddci = 0;
> > +	*mvdd = 0;
> > +
> > +	if (amdgpu_atom_parse_data_header(mode_info->atom_context,
> index, NULL,
> > +				   &frev, &crev, &data_offset)) {
> > +		firmware_info =
> > +			(union firmware_info *)(mode_info->atom_context-
> >bios +
> > +						data_offset);
> > +		*vddc = le16_to_cpu(firmware_info-
> >info_14.usBootUpVDDCVoltage);
> > +		if ((frev == 2) && (crev >= 2)) {
> > +			*vddci = le16_to_cpu(firmware_info-
> >info_22.usBootUpVDDCIVoltage);
> > +			*mvdd = le16_to_cpu(firmware_info-
> >info_22.usBootUpMVDDCVoltage);
> > +		}
> > +	}
> > +}
> > +
> > +union set_voltage {
> > +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> > +	struct _SET_VOLTAGE_PARAMETERS v1;
> > +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> > +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> > +};
> > +
> > +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8
> voltage_type,
> > +			     u16 voltage_id, u16 *voltage)
> > +{
> > +	union set_voltage args;
> > +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> > +	u8 frev, crev;
> > +
> > +	if (!amdgpu_atom_parse_cmd_header(adev-
> >mode_info.atom_context, index, &frev, &crev))
> > +		return -EINVAL;
> > +
> > +	switch (crev) {
> > +	case 1:
> > +		return -EINVAL;
> > +	case 2:
> > +		args.v2.ucVoltageType =
> SET_VOLTAGE_GET_MAX_VOLTAGE;
> > +		args.v2.ucVoltageMode = 0;
> > +		args.v2.usVoltageLevel = 0;
> > +
> > +		amdgpu_atom_execute_table(adev-
> >mode_info.atom_context, index, (uint32_t *)&args);
> > +
> > +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> > +		break;
> > +	case 3:
> > +		args.v3.ucVoltageType = voltage_type;
> > +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> > +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> > +
> > +		amdgpu_atom_execute_table(adev-
> >mode_info.atom_context, index, (uint32_t *)&args);
> > +
> > +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> > +		break;
> > +	default:
> > +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> > +		return -EINVAL;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct
> amdgpu_device *adev,
> > +						      u16 *voltage,
> > +						      u16 leakage_idx)
> > +{
> > +	return amdgpu_atombios_get_max_vddc(adev,
> VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> > +}
> > +
> > +union voltage_object_info {
> > +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> > +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> > +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> > +};
> > +
> > +union voltage_object {
> > +	struct _ATOM_VOLTAGE_OBJECT v1;
> > +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> > +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> > +};
> > +
> > +static ATOM_VOLTAGE_OBJECT_V3
> *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_I
> NFO_V3_1 *v3,
> > +									u8
> voltage_type, u8 voltage_mode)
> > +{
> > +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> > +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1,
> asVoltageObj[0]);
> > +	u8 *start = (u8 *)v3;
> > +
> > +	while (offset < size) {
> > +		ATOM_VOLTAGE_OBJECT_V3 *vo =
> (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> > +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType ==
> voltage_type) &&
> > +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode ==
> voltage_mode))
> > +			return vo;
> > +		offset += le16_to_cpu(vo-
> >asGpioVoltageObj.sHeader.usSize);
> > +	}
> > +	return NULL;
> > +}
> > +
> > +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> > +			      u8 voltage_type,
> > +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> > +{
> > +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > +	u8 frev, crev;
> > +	u16 data_offset, size;
> > +	union voltage_object_info *voltage_info;
> > +	union voltage_object *voltage_object = NULL;
> > +
> > +	if (amdgpu_atom_parse_data_header(adev-
> >mode_info.atom_context, index, &size,
> > +				   &frev, &crev, &data_offset)) {
> > +		voltage_info = (union voltage_object_info *)
> > +			(adev->mode_info.atom_context->bios +
> data_offset);
> > +
> > +		switch (frev) {
> > +		case 3:
> > +			switch (crev) {
> > +			case 1:
> > +				voltage_object = (union voltage_object *)
> > +
> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > +
> voltage_type,
> > +
> VOLTAGE_OBJ_SVID2);
> > +				if (voltage_object) {
> > +					*svd_gpio_id = voltage_object-
> >v3.asSVID2Obj.ucSVDGpioId;
> > +					*svc_gpio_id = voltage_object-
> >v3.asSVID2Obj.ucSVCGpioId;
> > +				} else {
> > +					return -EINVAL;
> > +				}
> > +				break;
> > +			default:
> > +				DRM_ERROR("unknown voltage object
> table\n");
> > +				return -EINVAL;
> > +			}
> > +			break;
> > +		default:
> > +			DRM_ERROR("unknown voltage object table\n");
> > +			return -EINVAL;
> > +		}
> > +
> > +	}
> > +	return 0;
> > +}
> > +
> > +bool
> > +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> > +				u8 voltage_type, u8 voltage_mode)
> > +{
> > +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > +	u8 frev, crev;
> > +	u16 data_offset, size;
> > +	union voltage_object_info *voltage_info;
> > +
> > +	if (amdgpu_atom_parse_data_header(adev-
> >mode_info.atom_context, index, &size,
> > +				   &frev, &crev, &data_offset)) {
> > +		voltage_info = (union voltage_object_info *)
> > +			(adev->mode_info.atom_context->bios +
> data_offset);
> > +
> > +		switch (frev) {
> > +		case 3:
> > +			switch (crev) {
> > +			case 1:
> > +				if
> (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > +
> voltage_type, voltage_mode))
> > +					return true;
> > +				break;
> > +			default:
> > +				DRM_ERROR("unknown voltage object
> table\n");
> > +				return false;
> > +			}
> > +			break;
> > +		default:
> > +			DRM_ERROR("unknown voltage object table\n");
> > +			return false;
> > +		}
> > +
> > +	}
> > +	return false;
> > +}
> > +
> > +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> > +				      u8 voltage_type, u8 voltage_mode,
> > +				      struct atom_voltage_table *voltage_table)
> > +{
> > +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> > +	u8 frev, crev;
> > +	u16 data_offset, size;
> > +	int i;
> > +	union voltage_object_info *voltage_info;
> > +	union voltage_object *voltage_object = NULL;
> > +
> > +	if (amdgpu_atom_parse_data_header(adev-
> >mode_info.atom_context, index, &size,
> > +				   &frev, &crev, &data_offset)) {
> > +		voltage_info = (union voltage_object_info *)
> > +			(adev->mode_info.atom_context->bios +
> data_offset);
> > +
> > +		switch (frev) {
> > +		case 3:
> > +			switch (crev) {
> > +			case 1:
> > +				voltage_object = (union voltage_object *)
> > +
> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> > +
> voltage_type, voltage_mode);
> > +				if (voltage_object) {
> > +					ATOM_GPIO_VOLTAGE_OBJECT_V3
> *gpio =
> > +						&voltage_object-
> >v3.asGpioVoltageObj;
> > +					VOLTAGE_LUT_ENTRY_V2 *lut;
> > +					if (gpio->ucGpioEntryNum >
> MAX_VOLTAGE_ENTRIES)
> > +						return -EINVAL;
> > +					lut = &gpio->asVolGpioLut[0];
> > +					for (i = 0; i < gpio->ucGpioEntryNum;
> i++) {
> > +						voltage_table-
> >entries[i].value =
> > +							le16_to_cpu(lut-
> >usVoltageValue);
> > +						voltage_table-
> >entries[i].smio_low =
> > +							le32_to_cpu(lut-
> >ulVoltageId);
> > +						lut =
> (VOLTAGE_LUT_ENTRY_V2 *)
> > +							((u8 *)lut +
> sizeof(VOLTAGE_LUT_ENTRY_V2));
> > +					}
> > +					voltage_table->mask_low =
> le32_to_cpu(gpio->ulGpioMaskVal);
> > +					voltage_table->count = gpio-
> >ucGpioEntryNum;
> > +					voltage_table->phase_delay = gpio-
> >ucPhaseDelay;
> > +					return 0;
> > +				}
> > +				break;
> > +			default:
> > +				DRM_ERROR("unknown voltage object
> table\n");
> > +				return -EINVAL;
> > +			}
> > +			break;
> > +		default:
> > +			DRM_ERROR("unknown voltage object table\n");
> > +			return -EINVAL;
> > +		}
> > +	}
> > +	return -EINVAL;
> > +}
> > +
> > +union vram_info {
> > +	struct _ATOM_VRAM_INFO_V3 v1_3;
> > +	struct _ATOM_VRAM_INFO_V4 v1_4;
> > +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> > +};
> > +
> > +#define MEM_ID_MASK           0xff000000
> > +#define MEM_ID_SHIFT          24
> > +#define CLOCK_RANGE_MASK      0x00ffffff
> > +#define CLOCK_RANGE_SHIFT     0
> > +#define LOW_NIBBLE_MASK       0xf
> > +#define DATA_EQU_PREV         0
> > +#define DATA_FROM_TABLE       4
> > +
> > +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> > +				      u8 module_index,
> > +				      struct atom_mc_reg_table *reg_table)
> > +{
> > +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> > +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> > +	u32 i = 0, j;
> > +	u16 data_offset, size;
> > +	union vram_info *vram_info;
> > +
> > +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> > +
> > +	if (amdgpu_atom_parse_data_header(adev-
> >mode_info.atom_context, index, &size,
> > +				   &frev, &crev, &data_offset)) {
> > +		vram_info = (union vram_info *)
> > +			(adev->mode_info.atom_context->bios +
> data_offset);
> > +		switch (frev) {
> > +		case 1:
> > +			DRM_ERROR("old table version %d, %d\n", frev,
> crev);
> > +			return -EINVAL;
> > +		case 2:
> > +			switch (crev) {
> > +			case 1:
> > +				if (module_index < vram_info-
> >v2_1.ucNumOfVRAMModule) {
> > +					ATOM_INIT_REG_BLOCK *reg_block
> =
> > +						(ATOM_INIT_REG_BLOCK *)
> > +						((u8 *)vram_info +
> le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> > +
> 	ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> > +
> 	(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> > +						((u8 *)reg_block + (2 *
> sizeof(u16)) +
> > +						 le16_to_cpu(reg_block-
> >usRegIndexTblSize));
> > +					ATOM_INIT_REG_INDEX_FORMAT
> *format = &reg_block->asRegIndexBuf[0];
> > +					num_entries =
> (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> > +
> sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> > +					if (num_entries >
> VBIOS_MC_REGISTER_ARRAY_SIZE)
> > +						return -EINVAL;
> > +					while (i < num_entries) {
> > +						if (format-
> >ucPreRegDataLength & ACCESS_PLACEHOLDER)
> > +							break;
> > +						reg_table-
> >mc_reg_address[i].s1 =
> > +
> 	(u16)(le16_to_cpu(format->usRegIndex));
> > +						reg_table-
> >mc_reg_address[i].pre_reg_data =
> > +							(u8)(format-
> >ucPreRegDataLength);
> > +						i++;
> > +						format =
> (ATOM_INIT_REG_INDEX_FORMAT *)
> > +							((u8 *)format +
> sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> > +					}
> > +					reg_table->last = i;
> > +					while ((le32_to_cpu(*(u32
> *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> > +					       (num_ranges <
> VBIOS_MAX_AC_TIMING_ENTRIES)) {
> > +						t_mem_id =
> (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> > +								>>
> MEM_ID_SHIFT);
> > +						if (module_index ==
> t_mem_id) {
> > +							reg_table-
> >mc_reg_table_entry[num_ranges].mclk_max =
> > +
> 	(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> > +								      >>
> CLOCK_RANGE_SHIFT);
> > +							for (i = 0, j = 1; i <
> reg_table->last; i++) {
> > +								if ((reg_table-
> >mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> DATA_FROM_TABLE) {
> > +
> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> > +
> 	(u32)le32_to_cpu(*((u32 *)reg_data + j));
> > +									j++;
> > +								} else if
> ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> DATA_EQU_PREV) {
> > +
> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> > +
> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> > +								}
> > +							}
> > +							num_ranges++;
> > +						}
> > +						reg_data =
> (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> > +							((u8 *)reg_data +
> le16_to_cpu(reg_block->usRegDataBlkSize));
> > +					}
> > +					if (le32_to_cpu(*(u32 *)reg_data) !=
> END_OF_REG_DATA_BLOCK)
> > +						return -EINVAL;
> > +					reg_table->num_entries =
> num_ranges;
> > +				} else
> > +					return -EINVAL;
> > +				break;
> > +			default:
> > +				DRM_ERROR("Unknown table
> version %d, %d\n", frev, crev);
> > +				return -EINVAL;
> > +			}
> > +			break;
> > +		default:
> > +			DRM_ERROR("Unknown table version %d, %d\n",
> frev, crev);
> > +			return -EINVAL;
> > +		}
> > +		return 0;
> > +	}
> > +	return -EINVAL;
> > +}
> > +
> > +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> > +{
> > +	const char *s;
> > +
> > +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> > +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> > +	default:
> > +		s = "none";
> > +		break;
> > +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> > +		s = "battery";
> > +		break;
> > +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> > +		s = "balanced";
> > +		break;
> > +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> > +		s = "performance";
> > +		break;
> > +	}
> > +	printk("\tui class: %s\n", s);
> > +	printk("\tinternal class:");
> > +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> > +	    (class2 == 0))
> > +		pr_cont(" none");
> > +	else {
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> > +			pr_cont(" boot");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> > +			pr_cont(" thermal");
> > +		if (class &
> ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> > +			pr_cont(" limited_pwr");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> > +			pr_cont(" rest");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> > +			pr_cont(" forced");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> > +			pr_cont(" 3d_perf");
> > +		if (class &
> ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> > +			pr_cont(" ovrdrv");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> > +			pr_cont(" uvd");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> > +			pr_cont(" 3d_low");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> > +			pr_cont(" acpi");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> > +			pr_cont(" uvd_hd2");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> > +			pr_cont(" uvd_hd");
> > +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> > +			pr_cont(" uvd_sd");
> > +		if (class2 &
> ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> > +			pr_cont(" limited_pwr2");
> > +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> > +			pr_cont(" ulv");
> > +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> > +			pr_cont(" uvd_mvc");
> > +	}
> > +	pr_cont("\n");
> > +}
> > +
> > +void amdgpu_dpm_print_cap_info(u32 caps)
> > +{
> > +	printk("\tcaps:");
> > +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> > +		pr_cont(" single_disp");
> > +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> > +		pr_cont(" video");
> > +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> > +		pr_cont(" no_dc");
> > +	pr_cont("\n");
> > +}
> > +
> > +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> > +				struct amdgpu_ps *rps)
> > +{
> > +	printk("\tstatus:");
> > +	if (rps == adev->pm.dpm.current_ps)
> > +		pr_cont(" c");
> > +	if (rps == adev->pm.dpm.requested_ps)
> > +		pr_cont(" r");
> > +	if (rps == adev->pm.dpm.boot_ps)
> > +		pr_cont(" b");
> > +	pr_cont("\n");
> > +}
> > +
> > +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> > +{
> > +	int i;
> > +
> > +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> > +		return;
> > +
> > +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> > +		amdgpu_dpm_print_power_state(adev, &adev-
> >pm.dpm.ps[i]);
> > +
> > +}
> > +
> > +union power_info {
> > +	struct _ATOM_POWERPLAY_INFO info;
> > +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> > +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> > +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> > +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> > +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> > +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> > +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> > +};
> > +
> > +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> > +{
> > +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > +	union power_info *power_info;
> > +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > +	u16 data_offset;
> > +	u8 frev, crev;
> > +
> > +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> index, NULL,
> > +				   &frev, &crev, &data_offset))
> > +		return -EINVAL;
> > +	power_info = (union power_info *)(mode_info->atom_context-
> >bios + data_offset);
> > +
> > +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info-
> >pplib.ulPlatformCaps);
> > +	adev->pm.dpm.backbias_response_time =
> le16_to_cpu(power_info->pplib.usBackbiasTime);
> > +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info-
> >pplib.usVoltageTime);
> > +
> > +	return 0;
> > +}
> > +
> > +union fan_info {
> > +	struct _ATOM_PPLIB_FANTABLE fan;
> > +	struct _ATOM_PPLIB_FANTABLE2 fan2;
> > +	struct _ATOM_PPLIB_FANTABLE3 fan3;
> > +};
> > +
> > +static int amdgpu_parse_clk_voltage_dep_table(struct
> amdgpu_clock_voltage_dependency_table *amdgpu_table,
> > +
> ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> > +{
> > +	u32 size = atom_table->ucNumEntries *
> > +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> > +	int i;
> > +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> > +
> > +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> > +	if (!amdgpu_table->entries)
> > +		return -ENOMEM;
> > +
> > +	entry = &atom_table->entries[0];
> > +	for (i = 0; i < atom_table->ucNumEntries; i++) {
> > +		amdgpu_table->entries[i].clk = le16_to_cpu(entry-
> >usClockLow) |
> > +			(entry->ucClockHigh << 16);
> > +		amdgpu_table->entries[i].v = le16_to_cpu(entry-
> >usVoltage);
> > +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record
> *)
> > +			((u8 *)entry +
> sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> > +	}
> > +	amdgpu_table->count = atom_table->ucNumEntries;
> > +
> > +	return 0;
> > +}
> > +
> > +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> > +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> > +
> > +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> > +{
> > +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > +	union power_info *power_info;
> > +	union fan_info *fan_info;
> > +	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> > +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > +	u16 data_offset;
> > +	u8 frev, crev;
> > +	int ret, i;
> > +
> > +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> > +				   &frev, &crev, &data_offset))
> > +		return -EINVAL;
> > +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> > +
> > +	/* fan table */
> > +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> > +		if (power_info->pplib3.usFanTableOffset) {
> > +			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
> > +						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
> > +			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> > +			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
> > +			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
> > +			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
> > +			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
> > +			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
> > +			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
> > +			if (fan_info->fan.ucFanTableFormat >= 2)
> > +				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
> > +			else
> > +				adev->pm.dpm.fan.t_max = 10900;
> > +			adev->pm.dpm.fan.cycle_delay = 100000;
> > +			if (fan_info->fan.ucFanTableFormat >= 3) {
> > +				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
> > +				adev->pm.dpm.fan.default_max_fan_pwm =
> > +					le16_to_cpu(fan_info->fan3.usFanPWMMax);
> > +				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
> > +				adev->pm.dpm.fan.fan_output_sensitivity =
> > +					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
> > +			}
> > +			adev->pm.dpm.fan.ucode_fan_control = true;
> > +		}
> > +	}
> > +
> > +	/* clock dependency tables, shedding tables */
> > +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> > +		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> > +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
> > +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
> > +								 dep_table);
> > +			if (ret) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return ret;
> > +			}
> > +		}
> > +		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> > +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> > +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> > +								 dep_table);
> > +			if (ret) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return ret;
> > +			}
> > +		}
> > +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> > +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> > +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> > +								 dep_table);
> > +			if (ret) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return ret;
> > +			}
> > +		}
> > +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> > +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> > +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> > +								 dep_table);
> > +			if (ret) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return ret;
> > +			}
> > +		}
> > +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> > +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> > +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> > +			if (clk_v->ucNumEntries) {
> > +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> > +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> > +					(clk_v->entries[0].ucSclkHigh << 16);
> > +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> > +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> > +					(clk_v->entries[0].ucMclkHigh << 16);
> > +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> > +					le16_to_cpu(clk_v->entries[0].usVddc);
> > +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> > +					le16_to_cpu(clk_v->entries[0].usVddci);
> > +			}
> > +		}
> > +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> > +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> > +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> > +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> > +
> > +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> > +				kcalloc(psl->ucNumEntries,
> > +					sizeof(struct amdgpu_phase_shedding_limits_entry),
> > +					GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +
> > +			entry = &psl->entries[0];
> > +			for (i = 0; i < psl->ucNumEntries; i++) {
> > +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> > +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> > +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> > +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> > +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> > +					le16_to_cpu(entry->usVoltage);
> > +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> > +			}
> > +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> > +				psl->ucNumEntries;
> > +		}
> > +	}
> > +
> > +	/* cac data */
> > +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> > +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> > +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> > +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> > +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> > +		if (adev->pm.dpm.tdp_od_limit)
> > +			adev->pm.dpm.power_control = true;
> > +		else
> > +			adev->pm.dpm.power_control = false;
> > +		adev->pm.dpm.tdp_adjustment = 0;
> > +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> > +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> > +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> > +		if (power_info->pplib5.usCACLeakageTableOffset) {
> > +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> > +				(ATOM_PPLIB_CAC_Leakage_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> > +			ATOM_PPLIB_CAC_Leakage_Record *entry;
> > +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> > +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			entry = &cac_table->entries[0];
> > +			for (i = 0; i < cac_table->ucNumEntries; i++) {
> > +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> > +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> > +						le16_to_cpu(entry->usVddc1);
> > +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> > +						le16_to_cpu(entry->usVddc2);
> > +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> > +						le16_to_cpu(entry->usVddc3);
> > +				} else {
> > +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> > +						le16_to_cpu(entry->usVddc);
> > +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> > +						le32_to_cpu(entry->ulLeakageValue);
> > +				}
> > +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> > +			}
> > +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> > +		}
> > +	}
> > +
> > +	/* ext tables */
> > +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> > +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> > +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> > +			(mode_info->atom_context->bios + data_offset +
> > +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> > +			ext_hdr->usVCETableOffset) {
> > +			VCEClockInfoArray *array = (VCEClockInfoArray *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> > +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> > +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> > +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> > +			ATOM_PPLIB_VCE_State_Table *states =
> > +				(ATOM_PPLIB_VCE_State_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> > +				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
> > +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> > +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> > +			ATOM_PPLIB_VCE_State_Record *state_entry;
> > +			VCEClockInfo *vce_clk;
> > +			u32 size = limits->numEntries *
> > +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> > +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> > +				kzalloc(size, GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> > +				limits->numEntries;
> > +			entry = &limits->entries[0];
> > +			state_entry = &states->entries[0];
> > +			for (i = 0; i < limits->numEntries; i++) {
> > +				vce_clk = (VCEClockInfo *)
> > +					((u8 *)&array->entries[0] +
> > +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> > +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> > +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> > +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> > +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> > +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> > +					le16_to_cpu(entry->usVoltage);
> > +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> > +			}
> > +			adev->pm.dpm.num_of_vce_states =
> > +					states->numEntries > AMD_MAX_VCE_LEVELS ?
> > +					AMD_MAX_VCE_LEVELS : states->numEntries;
> > +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> > +				vce_clk = (VCEClockInfo *)
> > +					((u8 *)&array->entries[0] +
> > +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> > +				adev->pm.dpm.vce_states[i].evclk =
> > +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> > +				adev->pm.dpm.vce_states[i].ecclk =
> > +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> > +				adev->pm.dpm.vce_states[i].clk_idx =
> > +					state_entry->ucClockInfoIndex & 0x3f;
> > +				adev->pm.dpm.vce_states[i].pstate =
> > +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> > +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> > +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> > +			}
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> > +			ext_hdr->usUVDTableOffset) {
> > +			UVDClockInfoArray *array = (UVDClockInfoArray *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> > +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> > +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> > +				 1 + (array->ucNumEntries * sizeof(UVDClockInfo)));
> > +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> > +			u32 size = limits->numEntries *
> > +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> > +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> > +				kzalloc(size, GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> > +				limits->numEntries;
> > +			entry = &limits->entries[0];
> > +			for (i = 0; i < limits->numEntries; i++) {
> > +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> > +					((u8 *)&array->entries[0] +
> > +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> > +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> > +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> > +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> > +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> > +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> > +					le16_to_cpu(entry->usVoltage);
> > +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> > +			}
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> > +			ext_hdr->usSAMUTableOffset) {
> > +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> > +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> > +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> > +			u32 size = limits->numEntries *
> > +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> > +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> > +				kzalloc(size, GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> > +				limits->numEntries;
> > +			entry = &limits->entries[0];
> > +			for (i = 0; i < limits->numEntries; i++) {
> > +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> > +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
> > +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> > +					le16_to_cpu(entry->usVoltage);
> > +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> > +			}
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> > +		    ext_hdr->usPPMTableOffset) {
> > +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> > +			adev->pm.dpm.dyn_state.ppm_table =
> > +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.ppm_table) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
> > +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
> > +				le16_to_cpu(ppm->usCpuCoreNumber);
> > +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
> > +				le32_to_cpu(ppm->ulPlatformTDP);
> > +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
> > +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> > +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
> > +				le32_to_cpu(ppm->ulPlatformTDC);
> > +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
> > +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> > +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> > +				le32_to_cpu(ppm->ulApuTDP);
> > +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> > +				le32_to_cpu(ppm->ulDGpuTDP);
> > +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
> > +				le32_to_cpu(ppm->ulDGpuUlvPower);
> > +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> > +				le32_to_cpu(ppm->ulTjmax);
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> > +			ext_hdr->usACPTableOffset) {
> > +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> > +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
> > +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> > +			u32 size = limits->numEntries *
> > +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> > +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> > +				kzalloc(size, GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> > +				limits->numEntries;
> > +			entry = &limits->entries[0];
> > +			for (i = 0; i < limits->numEntries; i++) {
> > +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> > +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
> > +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> > +					le16_to_cpu(entry->usVoltage);
> > +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> > +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> > +			}
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> > +			ext_hdr->usPowerTuneTableOffset) {
> > +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
> > +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> > +			ATOM_PowerTune_Table *pt;
> > +			adev->pm.dpm.dyn_state.cac_tdp_table =
> > +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
> > +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> > +				amdgpu_free_extended_power_table(adev);
> > +				return -ENOMEM;
> > +			}
> > +			if (rev > 0) {
> > +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> > +					(mode_info->atom_context->bios + data_offset +
> > +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> > +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
> > +					ppt->usMaximumPowerDeliveryLimit;
> > +				pt = &ppt->power_tune_table;
> > +			} else {
> > +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
> > +					(mode_info->atom_context->bios + data_offset +
> > +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> > +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
> > +				pt = &ppt->power_tune_table;
> > +			}
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
> > +				le16_to_cpu(pt->usConfigurableTDP);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
> > +				le16_to_cpu(pt->usBatteryPowerLimit);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
> > +				le16_to_cpu(pt->usSmallPowerLimit);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
> > +				le16_to_cpu(pt->usLowCACLeakage);
> > +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
> > +				le16_to_cpu(pt->usHighCACLeakage);
> > +		}
> > +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> > +				ext_hdr->usSclkVddgfxTableOffset) {
> > +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> > +				(mode_info->atom_context->bios + data_offset +
> > +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
> > +			ret = amdgpu_parse_clk_voltage_dep_table(
> > +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> > +					dep_table);
> > +			if (ret) {
> > +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> > +				return ret;
> > +			}
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> > +{
> > +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> > +
> > +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> > +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> > +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> > +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> > +	kfree(dyn_state->cac_leakage_table.entries);
> > +	kfree(dyn_state->phase_shedding_limits_table.entries);
> > +	kfree(dyn_state->ppm_table);
> > +	kfree(dyn_state->cac_tdp_table);
> > +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> > +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> > +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> > +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> > +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> > +}
> > +
> > +static const char *pp_lib_thermal_controller_names[] = {
> > +	"NONE",
> > +	"lm63",
> > +	"adm1032",
> > +	"adm1030",
> > +	"max6649",
> > +	"lm64",
> > +	"f75375",
> > +	"RV6xx",
> > +	"RV770",
> > +	"adt7473",
> > +	"NONE",
> > +	"External GPIO",
> > +	"Evergreen",
> > +	"emc2103",
> > +	"Sumo",
> > +	"Northern Islands",
> > +	"Southern Islands",
> > +	"lm96163",
> > +	"Sea Islands",
> > +	"Kaveri/Kabini",
> > +};
> > +
> > +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> > +{
> > +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> > +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> > +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> > +	ATOM_PPLIB_THERMALCONTROLLER *controller;
> > +	struct amdgpu_i2c_bus_rec i2c_bus;
> > +	u16 data_offset;
> > +	u8 frev, crev;
> > +
> > +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> > +				   &frev, &crev, &data_offset))
> > +		return;
> > +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> > +		(mode_info->atom_context->bios + data_offset);
> > +	controller = &power_table->sThermalController;
> > +
> > +	/* add the i2c bus for thermal/fan chip */
> > +	if (controller->ucType > 0) {
> > +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
> > +			adev->pm.no_fan = true;
> > +		adev->pm.fan_pulses_per_revolution =
> > +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
> > +		if (adev->pm.fan_pulses_per_revolution) {
> > +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> > +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
> > +		}
> > +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
> > +			DRM_INFO("Internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> > +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> > +			DRM_INFO("External GPIO thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
> > +		} else if (controller->ucType ==
> > +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> > +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> > +		} else if (controller->ucType ==
> > +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> > +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> > +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> > +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
> > +				 pp_lib_thermal_controller_names[controller->ucType],
> > +				 controller->ucI2cAddress >> 1,
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
> > +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
> > +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
> > +			if (adev->pm.i2c_bus) {
> > +				struct i2c_board_info info = { };
> > +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
> > +				info.addr = controller->ucI2cAddress >> 1;
> > +				strlcpy(info.type, name, sizeof(info.type));
> > +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
> > +			}
> > +		} else {
> > +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
> > +				 controller->ucType,
> > +				 controller->ucI2cAddress >> 1,
> > +				 (controller->ucFanParameters &
> > +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> > +		}
> > +	}
> > +}
> > +
> > +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	if (idx < adev->pm.dpm.num_of_vce_states)
> > +		return &adev->pm.dpm.vce_states[idx];
> > +
> > +	return NULL;
> > +}
> > +
> > +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> > +						     enum amd_pm_state_type dpm_state)
> > +{
> > +	int i;
> > +	struct amdgpu_ps *ps;
> > +	u32 ui_class;
> > +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> > +		true : false;
> > +
> > +	/* check if the vblank period is too short to adjust the mclk */
> > +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> > +		if (amdgpu_dpm_vblank_too_short(adev))
> > +			single_display = false;
> > +	}
> > +
> > +	/* certain older asics have a separate 3D performance state,
> > +	 * so try that first if the user selected performance
> > +	 */
> > +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> > +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> > +	/* balanced states don't exist at the moment */
> > +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> > +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > +
> > +restart_search:
> > +	/* Pick the best power state based on current conditions */
> > +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> > +		ps = &adev->pm.dpm.ps[i];
> > +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> > +		switch (dpm_state) {
> > +		/* user states */
> > +		case POWER_STATE_TYPE_BATTERY:
> > +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> > +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > +					if (single_display)
> > +						return ps;
> > +				} else
> > +					return ps;
> > +			}
> > +			break;
> > +		case POWER_STATE_TYPE_BALANCED:
> > +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> > +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > +					if (single_display)
> > +						return ps;
> > +				} else
> > +					return ps;
> > +			}
> > +			break;
> > +		case POWER_STATE_TYPE_PERFORMANCE:
> > +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> > +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> > +					if (single_display)
> > +						return ps;
> > +				} else
> > +					return ps;
> > +			}
> > +			break;
> > +		/* internal states */
> > +		case POWER_STATE_TYPE_INTERNAL_UVD:
> > +			if (adev->pm.dpm.uvd_ps)
> > +				return adev->pm.dpm.uvd_ps;
> > +			else
> > +				break;
> > +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> > +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_BOOT:
> > +			return adev->pm.dpm.boot_ps;
> > +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_ACPI:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_ULV:
> > +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> > +				return ps;
> > +			break;
> > +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> > +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> > +				return ps;
> > +			break;
> > +		default:
> > +			break;
> > +		}
> > +	}
> > +	/* use a fallback state if we didn't match */
> > +	switch (dpm_state) {
> > +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> > +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> > +		goto restart_search;
> > +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> > +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> > +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> > +		if (adev->pm.dpm.uvd_ps) {
> > +			return adev->pm.dpm.uvd_ps;
> > +		} else {
> > +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > +			goto restart_search;
> > +		}
> > +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> > +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> > +		goto restart_search;
> > +	case POWER_STATE_TYPE_INTERNAL_ACPI:
> > +		dpm_state = POWER_STATE_TYPE_BATTERY;
> > +		goto restart_search;
> > +	case POWER_STATE_TYPE_BATTERY:
> > +	case POWER_STATE_TYPE_BALANCED:
> > +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> > +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> > +		goto restart_search;
> > +	default:
> > +		break;
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +int amdgpu_dpm_change_power_state_locked(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	struct amdgpu_ps *ps;
> > +	enum amd_pm_state_type dpm_state;
> > +	int ret;
> > +	bool equal = false;
> > +
> > +	/* if dpm init failed */
> > +	if (!adev->pm.dpm_enabled)
> > +		return 0;
> > +
> > +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> > +		/* add other state override checks here */
> > +		if ((!adev->pm.dpm.thermal_active) &&
> > +		    (!adev->pm.dpm.uvd_active))
> > +			adev->pm.dpm.state = adev->pm.dpm.user_state;
> > +	}
> > +	dpm_state = adev->pm.dpm.state;
> > +
> > +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> > +	if (ps)
> > +		adev->pm.dpm.requested_ps = ps;
> > +	else
> > +		return -EINVAL;
> > +
> > +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> > +		printk("switching from power state:\n");
> > +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> > +		printk("switching to power state:\n");
> > +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> > +	}
> > +
> > +	/* update whether vce is active */
> > +	ps->vce_active = adev->pm.dpm.vce_active;
> > +	if (adev->powerplay.pp_funcs->display_configuration_changed)
> > +		amdgpu_dpm_display_configuration_changed(adev);
> > +
> > +	ret = amdgpu_dpm_pre_set_power_state(adev);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (adev->powerplay.pp_funcs->check_state_equal) {
> > +		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> > +			equal = false;
> > +	}
> > +
> > +	if (equal)
> > +		return 0;
> > +
> > +	if (adev->powerplay.pp_funcs->set_power_state)
> > +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> > +
> > +	amdgpu_dpm_post_set_power_state(adev);
> > +
> > +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> > +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> > +
> > +	if (adev->powerplay.pp_funcs->force_performance_level) {
> > +		if (adev->pm.dpm.thermal_active) {
> > +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> > +			/* force low perf level for thermal */
> > +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> > +			/* save the user's level */
> > +			adev->pm.dpm.forced_level = level;
> > +		} else {
> > +			/* otherwise, user selected level */
> > +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> > new file mode 100644
> > index 000000000000..4adc765c8824
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> > @@ -0,0 +1,70 @@
> > +/*
> > + * Copyright 2021 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +#ifndef __LEGACY_DPM_H__
> > +#define __LEGACY_DPM_H__
> > +
> > +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> > +					    u32 clock,
> > +					    bool strobe_mode,
> > +					    struct atom_mpll_param *mpll_param);
> > +
> > +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> > +					     u32 eng_clock, u32 mem_clock);
> > +
> > +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> > +					  u16 *vddc, u16 *vddci, u16 *mvdd);
> > +
> > +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> > +			     u16 voltage_id, u16 *voltage);
> > +
> > +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> > +						      u16 *voltage,
> > +						      u16 leakage_idx);
> > +
> > +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> > +			      u8 voltage_type,
> > +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> > +
> > +bool
> > +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> > +				u8 voltage_type, u8 voltage_mode);
> > +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> > +				      u8 voltage_type, u8 voltage_mode,
> > +				      struct atom_voltage_table *voltage_table);
> > +
> > +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> > +				      u8 module_index,
> > +				      struct atom_mc_reg_table *reg_table);
> > +
> > +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> > +void amdgpu_dpm_print_cap_info(u32 caps);
> > +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> > +				struct amdgpu_ps *rps);
> > +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> > +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> > +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> > +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> > +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
> > +int amdgpu_dpm_change_power_state_locked(void *handle);
> > +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> > +#endif
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > index 4f84d8b893f1..a2881c90d187 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > @@ -37,6 +37,7 @@
> >   #include <linux/math64.h>
> >   #include <linux/seq_file.h>
> >   #include <linux/firmware.h>
> > +#include <legacy_dpm.h>
> >
> >   #define MC_CG_ARB_FREQ_F0           0x0a
> >   #define MC_CG_ARB_FREQ_F1           0x0b
> > @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
> >   	.check_state_equal = &si_check_state_equal,
> >   	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >   	.read_sensor = &si_dpm_read_sensor,
> > +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >   };
> >
> >   static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
> >

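As an aside for readers following the refactor: below is a minimal, illustrative sketch of how common code can now drive a legacy backend through the new .change_power_state callback wired up in the si_dpm hunk above. The callback and the amd_pm_funcs table come from the quoted patch; this dispatcher itself is an assumption for illustration only, not code from the series.

/* Illustrative sketch -- not code from the series. */
static int example_change_power_state(struct amdgpu_device *adev)
{
	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

	/* backends that did not opt in simply report no support */
	if (!pp_funcs || !pp_funcs->change_power_state)
		return -EOPNOTSUPP;

	/* for si/kv this lands in amdgpu_dpm_change_power_state_locked() */
	return pp_funcs->change_power_state(adev->powerplay.pp_handle);
}

The point of the hook is that the caller never needs to know whether the backend is si_dpm, kv_dpm, or something newer; it only sees the unified entry point.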
^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-12-01  1:59     ` Quan, Evan
@ 2021-12-01  3:33       ` Lazar, Lijo
  2021-12-01  7:07         ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-12-01  3:33 UTC (permalink / raw)
  To: Quan, Evan, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian



On 12/1/2021 7:29 AM, Quan, Evan wrote:
> [AMD Official Use Only]
> 
> 
> 
>> -----Original Message-----
>> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Lazar, Lijo
>> Sent: Tuesday, November 30, 2021 4:10 PM
>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Feng, Kenneth
>> <Kenneth.Feng@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>
>> Subject: Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
>>
>>
>>
>> On 11/30/2021 1:12 PM, Evan Quan wrote:
>>> Those implementation details (whether swsmu is supported, whether some
>>> ppt_funcs are supported, accessing internal statistics ...) should be kept
>>> internal. It's not a good practice and even error prone to expose
>>> implementation details.
>>>
>>> Signed-off-by: Evan Quan <evan.quan@amd.com>
>>> Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
>>>    drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
>>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
>>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95 +++++++++++++++++++
>>>    drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
>>>    drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
>>>    drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
>>>    13 files changed, 155 insertions(+), 64 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>>> index bcfdb63b1d42..a545df4efce1 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>>> @@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct amdgpu_device *adev)
>>>    	adev->gfx.rlc.funcs->resume(adev);
>>>
>>>    	/* Wait for FW reset event complete */
>>> -	r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
>>> +	r = amdgpu_dpm_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
>>>    	if (r) {
>>>    		dev_err(adev->dev,
>>>    			"Failed to get response from firmware after reset\n");
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> index 164d6a9e9fbb..0d1f00b24aae 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> @@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void *data, u64 val)
>>>    		return ret;
>>>    	}
>>>
>>> -	if (is_support_sw_smu(adev)) {
>>> -		ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, &max_freq);
>>> -		if (ret || val > max_freq || val < min_freq)
>>> -			return -EINVAL;
>>> -		ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, (uint32_t)val, (uint32_t)val);
>>> -	} else {
>>> -		return 0;
>>> +	ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK, &min_freq, &max_freq);
>>> +	if (ret == -EOPNOTSUPP) {
>>> +		ret = 0;
>>> +		goto out;
>>>    	}
>>> +	if (ret || val > max_freq || val < min_freq) {
>>> +		ret = -EINVAL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK, (uint32_t)val, (uint32_t)val);
>>> +	if (ret)
>>> +		ret = -EINVAL;
>>>
>>> +out:
>>>    	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>>>    	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>>>
>>> -	if (ret)
>>> -		return -EINVAL;
>>> -
>>> -	return 0;
>>> +	return ret;
>>>    }
>>>
>>>    DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index 1989f9e9379e..41cc1ffb5809 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct amdgpu_device *adev)
>>>    	if (adev->asic_type == CHIP_ARCTURUS &&
>>>    	    amdgpu_passthrough(adev) &&
>>>    	    adev->gmc.xgmi.num_physical_nodes > 1)
>>> -		smu_set_light_sbr(&adev->smu, true);
>>> +		amdgpu_dpm_set_light_sbr(adev, true);
>>>
>>>    	if (adev->gmc.xgmi.num_physical_nodes > 1) {
>>>    		mutex_lock(&mgpu_info.mutex);
>>> @@ -2857,7 +2857,7 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
>>>    	int i, r;
>>>
>>>    	if (adev->in_s0ix)
>>> -		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry);
>>> +		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D3Entry);
>>>
>>>    	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>>>    		if (!adev->ip_blocks[i].status.valid)
>>> @@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
>>>    		return 0;
>>>
>>>    	if (adev->in_s0ix)
>>> -		amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry);
>>> +		amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D0Entry);
>>>
>>>    	/* post card */
>>>    	if (amdgpu_device_need_post(adev)) {
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> index 1916ec84dd71..3d8f82dc8c97 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> @@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct amdgpu_device *adev, uint32_t *value)
>>>
>>>    	mutex_lock(&adev->gfx.gfx_off_mutex);
>>>
>>> -	r = smu_get_status_gfxoff(adev, value);
>>> +	r = amdgpu_dpm_get_status_gfxoff(adev, value);
>>>
>>>    	mutex_unlock(&adev->gfx.gfx_off_mutex);
>>>
>>> @@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev)
>>>    	}
>>>    	return amdgpu_num_kcq;
>>>    }
>>> -
>>> -/* amdgpu_gfx_state_change_set - Handle gfx power state change set
>>> - * @adev: amdgpu_device pointer
>>> - * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
>>> - *
>>> - */
>>> -
>>> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state)
>>> -{
>>> -	mutex_lock(&adev->pm.mutex);
>>> -	if (adev->powerplay.pp_funcs &&
>>> -	    adev->powerplay.pp_funcs->gfx_state_change_set)
>>> -		((adev)->powerplay.pp_funcs->gfx_state_change_set(
>>> -			(adev)->powerplay.pp_handle, state));
>>> -	mutex_unlock(&adev->pm.mutex);
>>> -}
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>>> index f851196c83a5..776c886fd94a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>>> @@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
>>>    	AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
>>>    };
>>>
>>> -/* Argument for PPSMC_MSG_GpuChangeState */
>>> -enum gfx_change_state {
>>> -	sGpuChangeState_D0Entry = 1,
>>> -	sGpuChangeState_D3Entry,
>>> -};
>>> -
>>>    #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
>>>    #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
>>>
>>> @@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct amdgpu_device *adev,
>>>    uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
>>>    void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v);
>>>    int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev);
>>> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum gfx_change_state state);
>>>    #endif
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>>> index 3c623e589b79..35c4aec04a7e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>>> @@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct amdgpu_device *adev, struct ras_err_d
>>>    	 * choosing right query method according to
>>>    	 * whether smu support query error information
>>>    	 */
>>> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
>>> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
>>>    	if (ret == -EOPNOTSUPP) {
>>>    		if (adev->umc.ras_funcs &&
>>>    			adev->umc.ras_funcs->query_ras_error_count)
>>> @@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct amdgpu_device *adev)
>>>    		if (ret)
>>>    			goto free;
>>>
>>> -		if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>>> -			adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
>>> +		amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
>>>    	}
>>>
>>>    #ifdef CONFIG_X86_MCE_AMD
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>>> index 6e4bea012ea4..5fed26c8db44 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>>> @@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
>>>    	int ret = 0;
>>>
>>>    	kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
>>> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
>>> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
>>>    	if (ret == -EOPNOTSUPP) {
>>>    		if (adev->umc.ras_funcs &&
>>>    		    adev->umc.ras_funcs->query_ras_error_count)
>>> @@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct amdgpu_device *adev,
>>>    						err_data->err_addr_cnt);
>>>    			amdgpu_ras_save_bad_pages(adev);
>>>
>>> -			if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>>> -				adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, con->eeprom_control.ras_num_recs);
>>> +			amdgpu_dpm_send_hbm_bad_pages_num(adev, con->eeprom_control.ras_num_recs);
>>>    		}
>>>
>>>    		amdgpu_ras_reset_gpu(adev);
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> index deae12dc777d..329a4c89f1e6 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>>> @@ -222,7 +222,7 @@ void kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
>>>
>>>    	len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
>>>    		       KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
>>> -		       atomic64_read(&dev->adev->smu.throttle_int_counter));
>>> +		       amdgpu_dpm_get_thermal_throttling_counter(dev->adev));
>>>
>>>    	add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE, fifo_in, len);
>>>    }
>>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> index 5c0867ebcfce..2e295facd086 100644
>>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> @@ -26,6 +26,10 @@
>>>
>>>    extern const struct amdgpu_ip_block_version pp_smu_ip_block;
>>>
>>> +enum smu_event_type {
>>> +	SMU_EVENT_RESET_COMPLETE = 0,
>>> +};
>>> +
>>>    struct amd_vce_state {
>>>    	/* vce clocks */
>>>    	u32 evclk;
>>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> index 08362d506534..9b332c8a0079 100644
>>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> @@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
>>>
>>>    	return 0;
>>>    }
>>> +
>>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
>>> +{
>>> +	return smu_set_light_sbr(&adev->smu, enable);
>>> +}
>>> +
>>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
>>> +{
>>> +	int ret = 0;
>>> +
>>> +	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>>> +		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
>>> +				  enum pp_clock_type type,
>>> +				  uint32_t *min,
>>> +				  uint32_t *max)
>>> +{
>>> +	if (!is_support_sw_smu(adev))
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	switch (type) {
>>> +	case PP_SCLK:
>>> +		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +}
>>> +
>>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>>> +				   enum pp_clock_type type,
>>> +				   uint32_t min,
>>> +				   uint32_t max)
>>> +{
>>> +	if (!is_support_sw_smu(adev))
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	switch (type) {
>>> +	case PP_SCLK:
>>> +		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +}
>>> +
>>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
>>> +			      enum smu_event_type event,
>>> +			      uint64_t event_arg)
>>> +{
>>> +	if (!is_support_sw_smu(adev))
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	return smu_wait_for_event(&adev->smu, event, event_arg);
>>> +}
>>> +
>>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
>>> +{
>>> +	if (!is_support_sw_smu(adev))
>>> +		return -EOPNOTSUPP;
>>> +
>>> +	return smu_get_status_gfxoff(&adev->smu, value);
>>> +}
>>> +
>>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
>>> +{
>>> +	return atomic64_read(&adev->smu.throttle_int_counter);
>>> +}
>>> +
>>> +/* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
>>> + * @adev: amdgpu_device pointer
>>> + * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 -sGpuChangeState_D3Entry)
>>> + *
>>> + */
>>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>>> +				 enum gfx_change_state state)
>>> +{
>>> +	mutex_lock(&adev->pm.mutex);
>>> +	if (adev->powerplay.pp_funcs &&
>>> +	    adev->powerplay.pp_funcs->gfx_state_change_set)
>>> +		((adev)->powerplay.pp_funcs->gfx_state_change_set(
>>> +			(adev)->powerplay.pp_handle, state));
>>> +	mutex_unlock(&adev->pm.mutex);
>>> +}
>>> +
>>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>>> +			    void *umc_ecc)
>>> +{
>>> +	if (!is_support_sw_smu(adev))
>>> +		return -EOPNOTSUPP;
>>> +
>>
>> In general, I don't think we need to keep this check everywhere to make
>> amdgpu_dpm_* backwards compatible. The usage is also inconsistent. For
>> example: amdgpu_dpm_get_thermal_throttling_counter doesn't have any
>> is_support_sw_smu check whereas amdgpu_dpm_get_ecc_info() has it.
>> There is no reason to keep adding an is_support_sw_smu() check for every new
>> public API. For sure, they are not going to work with the powerplay subsystem.
>>
>> I would rather prefer to leave old things as they are and create amdgpu_smu_* for
>> anything which is supported only in the smu subsystem. It's easier to read from
>> a code perspective also - separate the ones which are supported by the smu
>> component from those not supported in the older powerplay components.
>>
>> Only for the common ones that are supported in both powerplay and smu, keep
>> amdgpu_dpm_*; for the others the preference would be amdgpu_smu_*.
> [Quan, Evan] I get your point. However, that will bring back the problem we are trying to avoid:
> the caller needs to know whether the amdgpu_smu_* APIs can be used. They need to know whether the swsmu framework is supported on a given ASIC.

swsmu has been around for some time. I'm suggesting to move away from 
amdgpu_dpm_*, which is the legacy interface. There is no need to add new 
dpm_* APIs which are supported only in the swsmu component. We only need to 
maintain the existing usage of dpm_*.

For the newer ones, let us move to component-based APIs like 
amdgpu_smu_*. All the clients of swsmu are part of this component-based 
architecture and they need to be aware of the services of swsmu. It is 
similar to what is followed in other components like amdgpu_gmc_*, 
amdgpu_vcn*, etc.

Thanks,
Lijo

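As an illustration of the split being discussed, here is a minimal sketch; the example_op names and both function bodies are hypothetical, not code from the series. An operation implemented by both powerplay and swsmu keeps the amdgpu_dpm_* prefix and dispatches through pp_funcs, while a swsmu-only service takes an amdgpu_smu_* prefix so the component dependency is visible at the call site instead of behind an is_support_sw_smu() guard inside every wrapper.

/* Hypothetical sketch of the proposed naming split -- not code from the series. */

/* implemented by both frameworks: keeps the amdgpu_dpm_* prefix */
int amdgpu_dpm_example_common_op(struct amdgpu_device *adev)
{
	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

	if (!pp_funcs || !pp_funcs->example_op) /* example_op is hypothetical */
		return -EOPNOTSUPP;

	return pp_funcs->example_op(adev->powerplay.pp_handle);
}

/* swsmu-only: the amdgpu_smu_* prefix itself documents the dependency,
 * so no per-call is_support_sw_smu() check is needed in the wrapper */
int amdgpu_smu_example_smu_only_op(struct amdgpu_device *adev)
{
	return smu_example_smu_only_op(&adev->smu); /* hypothetical swsmu call */
}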
> And yes, there are some inconsistent cases in the current power code. Maybe we can create new patch(es) to fix them?
> For this patch series, I would like to avoid any real code logic change.
> 
> BR
> Evan
>>
>> Thanks,
>> Lijo
>>
>>> +	return smu_get_ecc_info(&adev->smu, umc_ecc); }
>>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> index 16e3f72d31b9..7289d379a9fb 100644
>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> @@ -23,6 +23,12 @@
>>>    #ifndef __AMDGPU_DPM_H__
>>>    #define __AMDGPU_DPM_H__
>>>
>>> +/* Argument for PPSMC_MSG_GpuChangeState */
>>> +enum gfx_change_state {
>>> +	sGpuChangeState_D0Entry = 1,
>>> +	sGpuChangeState_D3Entry,
>>> +};
>>> +
>>>    enum amdgpu_int_thermal_type {
>>>    	THERMAL_TYPE_NONE,
>>>    	THERMAL_TYPE_EXTERNAL,
>>> @@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct
>> amdgpu_device *adev, bool enable);
>>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool
>> enable);
>>>    void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>>    int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev,
>> uint32_t
>>> *smu_version);
>>> -
>>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
>>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
>>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
>>> +				       enum pp_clock_type type,
>>> +				       uint32_t *min,
>>> +				       uint32_t *max);
>>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>>> +				        enum pp_clock_type type,
>>> +				        uint32_t min,
>>> +				        uint32_t max);
>>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
>>> +		       uint64_t event_arg);
>>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
>>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev);
>>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>>> +				 enum gfx_change_state state);
>>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>>> +			    void *umc_ecc);
>>>    #endif
>>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> index f738f7dc20c9..29791bb21fba 100644
>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> @@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
>>>    	uint32_t clk_dependency;
>>>    };
>>>
>>> -enum smu_event_type {
>>> -
>>> -	SMU_EVENT_RESET_COMPLETE = 0,
>>> -};
>>> -
>>>    #define SMU_TABLE_INIT(tables, table_id, s, a, d)	\
>>>    	do {						\
>>>    		tables[table_id].size = s;		\
>>> @@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context *smu);
>>>
>>>    int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
>>>
>>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
>>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
>>>
>>>    int smu_set_light_sbr(struct smu_context *smu, bool enable);
>>>
>>> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
>>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
>>>    		       uint64_t event_arg);
>>>    int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
>>>    int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> index 5839918cb574..ef7d0e377965 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> @@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void *handle,
>>>    	return ret;
>>>    }
>>>
>>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
>>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
>>>    {
>>> -	int ret = 0;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	if (!smu->ppt_funcs->get_gfx_off_status)
>>> +		return -EINVAL;
>>>
>>> -	if (is_support_sw_smu(adev) && smu->ppt_funcs->get_gfx_off_status)
>>> -		*value = smu_get_gfx_off_status(smu);
>>> -	else
>>> -		ret = -EINVAL;
>>> +	*value = smu_get_gfx_off_status(smu);
>>>
>>> -	return ret;
>>> +	return 0;
>>>    }
>>>
>>>    int smu_set_soft_freq_range(struct smu_context *smu,
>>> @@ -3167,11 +3164,10 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
>>>    	.get_smu_prv_buf_details = smu_get_prv_buffer_details,
>>>    };
>>>
>>> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
>>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
>>>    		       uint64_t event_arg)
>>>    {
>>>    	int ret = -EINVAL;
>>> -	struct smu_context *smu = &adev->smu;
>>>
>>>    	if (smu->ppt_funcs->wait_for_event) {
>>>    		mutex_lock(&smu->mutex);
>>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-11-30  8:09   ` Lazar, Lijo
  2021-12-01  1:59     ` Quan, Evan
@ 2021-12-01  3:37     ` Lazar, Lijo
  1 sibling, 0 replies; 44+ messages in thread
From: Lazar, Lijo @ 2021-12-01  3:37 UTC (permalink / raw)
  To: Evan Quan, amd-gfx; +Cc: Alexander.Deucher, Kenneth.Feng, christian.koenig



On 11/30/2021 1:39 PM, Lazar, Lijo wrote:
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
>> Those implementation details (whether swsmu is supported, whether some
>> ppt_funcs are supported, accessing internal statistics ...) should be
>> kept internal. It's not a good practice, and is even error prone, to
>> expose implementation details.
>>
>> Signed-off-by: Evan Quan <evan.quan@amd.com>
>> Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
>> ---
>>   drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
>>   drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
>>   .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
>>   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95 +++++++++++++++++++
>>   drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
>>   drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
>>   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
>>   13 files changed, 155 insertions(+), 64 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c 
>> b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>> index bcfdb63b1d42..a545df4efce1 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
>> @@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct 
>> amdgpu_device *adev)
>>       adev->gfx.rlc.funcs->resume(adev);
>>       /* Wait for FW reset event complete */
>> -    r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
>> +    r = amdgpu_dpm_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
>>       if (r) {
>>           dev_err(adev->dev,
>>               "Failed to get response from firmware after reset\n");
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>> index 164d6a9e9fbb..0d1f00b24aae 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>> @@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void *data, 
>> u64 val)
>>           return ret;
>>       }
>> -    if (is_support_sw_smu(adev)) {
>> -        ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, &min_freq, 
>> &max_freq);
>> -        if (ret || val > max_freq || val < min_freq)
>> -            return -EINVAL;
>> -        ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK, 
>> (uint32_t)val, (uint32_t)val);
>> -    } else {
>> -        return 0;
>> +    ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK, &min_freq, 
>> &max_freq);
>> +    if (ret == -EOPNOTSUPP) {
>> +        ret = 0;
>> +        goto out;
>>       }
>> +    if (ret || val > max_freq || val < min_freq) {
>> +        ret = -EINVAL;
>> +        goto out;
>> +    }
>> +
>> +    ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK, 
>> (uint32_t)val, (uint32_t)val);
>> +    if (ret)
>> +        ret = -EINVAL;
>> +out:
>>       pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
>>       pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
>> -    if (ret)
>> -        return -EINVAL;
>> -
>> -    return 0;
>> +    return ret;
>>   }
>>   DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL,
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index 1989f9e9379e..41cc1ffb5809 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct 
>> amdgpu_device *adev)
>>       if (adev->asic_type == CHIP_ARCTURUS &&
>>           amdgpu_passthrough(adev) &&
>>           adev->gmc.xgmi.num_physical_nodes > 1)
>> -        smu_set_light_sbr(&adev->smu, true);
>> +        amdgpu_dpm_set_light_sbr(adev, true);
>>       if (adev->gmc.xgmi.num_physical_nodes > 1) {
>>           mutex_lock(&mgpu_info.mutex);
>> @@ -2857,7 +2857,7 @@ static int 
>> amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
>>       int i, r;
>>       if (adev->in_s0ix)
>> -        amdgpu_gfx_state_change_set(adev, sGpuChangeState_D3Entry);
>> +        amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D3Entry);
>>       for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
>>           if (!adev->ip_blocks[i].status.valid)
>> @@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device *dev, 
>> bool fbcon)
>>           return 0;
>>       if (adev->in_s0ix)
>> -        amdgpu_gfx_state_change_set(adev, sGpuChangeState_D0Entry);
>> +        amdgpu_dpm_gfx_state_change(adev, sGpuChangeState_D0Entry);
>>       /* post card */
>>       if (amdgpu_device_need_post(adev)) {
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> index 1916ec84dd71..3d8f82dc8c97 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> @@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct amdgpu_device 
>> *adev, uint32_t *value)
>>       mutex_lock(&adev->gfx.gfx_off_mutex);
>> -    r = smu_get_status_gfxoff(adev, value);
>> +    r = amdgpu_dpm_get_status_gfxoff(adev, value);
>>       mutex_unlock(&adev->gfx.gfx_off_mutex);
>> @@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct amdgpu_device 
>> *adev)
>>       }
>>       return amdgpu_num_kcq;
>>   }
>> -
>> -/* amdgpu_gfx_state_change_set - Handle gfx power state change set
>> - * @adev: amdgpu_device pointer
>> - * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 
>> -sGpuChangeState_D3Entry)
>> - *
>> - */
>> -
>> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum 
>> gfx_change_state state)
>> -{
>> -    mutex_lock(&adev->pm.mutex);
>> -    if (adev->powerplay.pp_funcs &&
>> -        adev->powerplay.pp_funcs->gfx_state_change_set)
>> -        ((adev)->powerplay.pp_funcs->gfx_state_change_set(
>> -            (adev)->powerplay.pp_handle, state));
>> -    mutex_unlock(&adev->pm.mutex);
>> -}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> index f851196c83a5..776c886fd94a 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> @@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
>>       AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
>>   };
>> -/* Argument for PPSMC_MSG_GpuChangeState */
>> -enum gfx_change_state {
>> -    sGpuChangeState_D0Entry = 1,
>> -    sGpuChangeState_D3Entry,
>> -};
>> -
>>   #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
>>   #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
>> @@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct 
>> amdgpu_device *adev,
>>   uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
>>   void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, 
>> uint32_t v);
>>   int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev);
>> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum 
>> gfx_change_state state);
>>   #endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>> index 3c623e589b79..35c4aec04a7e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
>> @@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct 
>> amdgpu_device *adev, struct ras_err_d
>>        * choosing right query method according to
>>        * whether smu support query error information
>>        */
>> -    ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
>> +    ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
>>       if (ret == -EOPNOTSUPP) {
>>           if (adev->umc.ras_funcs &&
>>               adev->umc.ras_funcs->query_ras_error_count)
>> @@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct 
>> amdgpu_device *adev)
>>           if (ret)
>>               goto free;
>> -        if (adev->smu.ppt_funcs && 
>> adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>> -            adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, 
>> con->eeprom_control.ras_num_recs);
>> +        amdgpu_dpm_send_hbm_bad_pages_num(adev, 
>> con->eeprom_control.ras_num_recs);
>>       }
>>   #ifdef CONFIG_X86_MCE_AMD
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>> index 6e4bea012ea4..5fed26c8db44 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
>> @@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct 
>> amdgpu_device *adev,
>>       int ret = 0;
>>       kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
>> -    ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
>> +    ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
>>       if (ret == -EOPNOTSUPP) {
>>           if (adev->umc.ras_funcs &&
>>               adev->umc.ras_funcs->query_ras_error_count)
>> @@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct 
>> amdgpu_device *adev,
>>                           err_data->err_addr_cnt);
>>               amdgpu_ras_save_bad_pages(adev);
>> -            if (adev->smu.ppt_funcs && 
>> adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>> -                
>> adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, 
>> con->eeprom_control.ras_num_recs);
>> +            amdgpu_dpm_send_hbm_bad_pages_num(adev, 
>> con->eeprom_control.ras_num_recs);
>>           }
>>           amdgpu_ras_reset_gpu(adev);
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c 
>> b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>> index deae12dc777d..329a4c89f1e6 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
>> @@ -222,7 +222,7 @@ void 
>> kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
>>       len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
>>                  KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
>> -               atomic64_read(&dev->adev->smu.throttle_int_counter));
>> +               amdgpu_dpm_get_thermal_throttling_counter(dev->adev));
>>       add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE,    
>> fifo_in, len);
>>   }
>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h 
>> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>> index 5c0867ebcfce..2e295facd086 100644
>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>> @@ -26,6 +26,10 @@
>>   extern const struct amdgpu_ip_block_version pp_smu_ip_block;
>> +enum smu_event_type {
>> +    SMU_EVENT_RESET_COMPLETE = 0,
>> +};
>> +
>>   struct amd_vce_state {
>>       /* vce clocks */
>>       u32 evclk;
>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c 
>> b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>> index 08362d506534..9b332c8a0079 100644
>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>> @@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct 
>> amdgpu_device *adev, uint32_t *smu_versio
>>       return 0;
>>   }
>> +
>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
>> +{
>> +    return smu_set_light_sbr(&adev->smu, enable);
>> +}
>> +
>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, 
>> uint32_t size)
>> +{
>> +    int ret = 0;
>> +
>> +    if (adev->smu.ppt_funcs && 
>> adev->smu.ppt_funcs->send_hbm_bad_pages_num)
>> +        ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, 
>> size);
>> +
>> +    return ret;
>> +}
>> +
>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
>> +                  enum pp_clock_type type,
>> +                  uint32_t *min,
>> +                  uint32_t *max)
>> +{
>> +    if (!is_support_sw_smu(adev))
>> +        return -EOPNOTSUPP;
>> +
>> +    switch (type) {
>> +    case PP_SCLK:
>> +        return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
>> +    default:
>> +        return -EINVAL;
>> +    }
>> +}
>> +
>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>> +                   enum pp_clock_type type,
>> +                   uint32_t min,
>> +                   uint32_t max)
>> +{
>> +    if (!is_support_sw_smu(adev))
>> +        return -EOPNOTSUPP;
>> +
>> +    switch (type) {
>> +    case PP_SCLK:
>> +        return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
>> +    default:
>> +        return -EINVAL;
>> +    }
>> +}
>> +
>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
>> +                  enum smu_event_type event,
>> +                  uint64_t event_arg)
>> +{
>> +    if (!is_support_sw_smu(adev))
>> +        return -EOPNOTSUPP;
>> +
>> +    return smu_wait_for_event(&adev->smu, event, event_arg);
>> +}
>> +
>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t 
>> *value)
>> +{
>> +    if (!is_support_sw_smu(adev))
>> +        return -EOPNOTSUPP;
>> +
>> +    return smu_get_status_gfxoff(&adev->smu, value);
>> +}
>> +
>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct 
>> amdgpu_device *adev)
>> +{
>> +    return atomic64_read(&adev->smu.throttle_int_counter);
>> +}
>> +
>> +/* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
>> + * @adev: amdgpu_device pointer
>> + * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2 
>> -sGpuChangeState_D3Entry)
>> + *
>> + */
>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>> +                 enum gfx_change_state state)
>> +{
>> +    mutex_lock(&adev->pm.mutex);
>> +    if (adev->powerplay.pp_funcs &&
>> +        adev->powerplay.pp_funcs->gfx_state_change_set)
>> +        ((adev)->powerplay.pp_funcs->gfx_state_change_set(
>> +            (adev)->powerplay.pp_handle, state));
>> +    mutex_unlock(&adev->pm.mutex);
>> +}
>> +
>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>> +                void *umc_ecc)
>> +{
>> +    if (!is_support_sw_smu(adev))
>> +        return -EOPNOTSUPP;
>> +
> 
> In general, I don't think we need to keep this check everywhere to make 
> amdgpu_dpm_* backwards compatible. The usage is also inconsistent. For 
> example, amdgpu_dpm_get_thermal_throttling_counter() doesn't have any 
> is_support_sw_smu() check whereas amdgpu_dpm_get_ecc_info() has it. There 
> is no reason to keep adding an is_support_sw_smu() check for every new 
> public API. For sure, they are not going to work with the powerplay 
> subsystem.
> 
> I would rather prefer to leave the old things alone and create 
> amdgpu_smu_* for anything which is supported only in the smu subsystem. 
> It's easier to read from a code perspective also - it separates the ones 
> which are supported by the smu component from those not supported in the 
> older powerplay components.
> 
> Only for the common ones that are supported in both powerplay and smu, 
> keep amdgpu_dpm_*; for the others the preference would be amdgpu_smu_*.
> 

To add to the previous point - many of the new services offered by swsmu 
(e.g. reset, i2c transfer, hbm ecc info etc.) are not even related to 
device power management, so it doesn't make sense to wrap them as dpm_* 
services.
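
For instance (hypothetical declarations, only to illustrate the point - 
these are not from the patch), services like the following would read 
oddly behind a dpm_ prefix:

	int amdgpu_smu_mode2_reset(struct amdgpu_device *adev);
	int amdgpu_smu_i2c_xfer(struct amdgpu_device *adev,
				struct i2c_msg *msgs, int num);
	int amdgpu_smu_get_ecc_info(struct amdgpu_device *adev, void *umc_ecc);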

Thanks,
Lijo

> Thanks,
> Lijo
> 
>> +    return smu_get_ecc_info(&adev->smu, umc_ecc);
>> +}
>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h 
>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>> index 16e3f72d31b9..7289d379a9fb 100644
>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>> @@ -23,6 +23,12 @@
>>   #ifndef __AMDGPU_DPM_H__
>>   #define __AMDGPU_DPM_H__
>> +/* Argument for PPSMC_MSG_GpuChangeState */
>> +enum gfx_change_state {
>> +    sGpuChangeState_D0Entry = 1,
>> +    sGpuChangeState_D3Entry,
>> +};
>> +
>>   enum amdgpu_int_thermal_type {
>>       THERMAL_TYPE_NONE,
>>       THERMAL_TYPE_EXTERNAL,
>> @@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device 
>> *adev, bool enable);
>>   void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
>>   void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>   int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t 
>> *smu_version);
>> -
>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, 
>> uint32_t size);
>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
>> +                       enum pp_clock_type type,
>> +                       uint32_t *min,
>> +                       uint32_t *max);
>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
>> +                        enum pp_clock_type type,
>> +                        uint32_t min,
>> +                        uint32_t max);
>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum 
>> smu_event_type event,
>> +               uint64_t event_arg);
>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t 
>> *value);
>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct 
>> amdgpu_device *adev);
>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
>> +                 enum gfx_change_state state);
>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>> +                void *umc_ecc);
>>   #endif
>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h 
>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>> index f738f7dc20c9..29791bb21fba 100644
>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>> @@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
>>       uint32_t clk_dependency;
>>   };
>> -enum smu_event_type {
>> -
>> -    SMU_EVENT_RESET_COMPLETE = 0,
>> -};
>> -
>>   #define SMU_TABLE_INIT(tables, table_id, s, a, d)    \
>>       do {                        \
>>           tables[table_id].size = s;        \
>> @@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context *smu);
>>   int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
>>   int smu_set_light_sbr(struct smu_context *smu, bool enable);
>> -int smu_wait_for_event(struct amdgpu_device *adev, enum 
>> smu_event_type event,
>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type 
>> event,
>>                  uint64_t event_arg);
>>   int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
>>   int smu_stb_collect_info(struct smu_context *smu, void *buff, 
>> uint32_t size);
>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c 
>> b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>> index 5839918cb574..ef7d0e377965 100644
>> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>> @@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void 
>> *handle,
>>       return ret;
>>   }
>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
>>   {
>> -    int ret = 0;
>> -    struct smu_context *smu = &adev->smu;
>> +    if (!smu->ppt_funcs->get_gfx_off_status)
>> +        return -EINVAL;
>> -    if (is_support_sw_smu(adev) && smu->ppt_funcs->get_gfx_off_status)
>> -        *value = smu_get_gfx_off_status(smu);
>> -    else
>> -        ret = -EINVAL;
>> +    *value = smu_get_gfx_off_status(smu);
>> -    return ret;
>> +    return 0;
>>   }
>>   int smu_set_soft_freq_range(struct smu_context *smu,
>> @@ -3167,11 +3164,10 @@ static const struct amd_pm_funcs 
>> swsmu_pm_funcs = {
>>       .get_smu_prv_buf_details = smu_get_prv_buffer_details,
>>   };
>> -int smu_wait_for_event(struct amdgpu_device *adev, enum 
>> smu_event_type event,
>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type 
>> event,
>>                  uint64_t event_arg)
>>   {
>>       int ret = -EINVAL;
>> -    struct smu_context *smu = &adev->smu;
>>       if (smu->ppt_funcs->wait_for_event) {
>>           mutex_lock(&smu->mutex);
>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task()
  2021-11-30 13:48   ` Lazar, Lijo
@ 2021-12-01  3:50     ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  3:50 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Tuesday, November 30, 2021 9:48 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 11/17] drm/amd/pm: correct the usage for
> amdgpu_dpm_dispatch_task()
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > We should avoid having multi-function APIs. It should be up to the
> > caller to determine when or whether to call amdgpu_dpm_dispatch_task().
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: I78ec4eb8ceb6e526a4734113d213d15a5fbaa8a4
> > ---
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c | 18 ++----------------
> >   drivers/gpu/drm/amd/pm/amdgpu_pm.c  | 26 ++++++++++++++++++++++++--
> >   2 files changed, 26 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index c6299e406848..8f0ae58f4292 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -558,8 +558,6 @@ void amdgpu_dpm_set_power_state(struct amdgpu_device *adev,
> >   				enum amd_pm_state_type state)
> >   {
> >   	adev->pm.dpm.user_state = state;
> > -
> > -	amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_ENABLE_USER_STATE, &state);
> >   }
> >
> >   enum amd_dpm_forced_level amdgpu_dpm_get_performance_level(struct amdgpu_device *adev)
> > @@ -727,13 +725,7 @@ int amdgpu_dpm_set_sclk_od(struct amdgpu_device *adev, uint32_t value)
> >   	if (!pp_funcs->set_sclk_od)
> >   		return -EOPNOTSUPP;
> >
> > -	pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
> > -
> > -	amdgpu_dpm_dispatch_task(adev,
> > -				 AMD_PP_TASK_READJUST_POWER_STATE,
> > -				 NULL);
> > -
> > -	return 0;
> > +	return pp_funcs->set_sclk_od(adev->powerplay.pp_handle, value);
> >   }
> >
> >   int amdgpu_dpm_get_mclk_od(struct amdgpu_device *adev)
> > @@ -753,13 +745,7 @@ int amdgpu_dpm_set_mclk_od(struct amdgpu_device *adev, uint32_t value)
> >   	if (!pp_funcs->set_mclk_od)
> >   		return -EOPNOTSUPP;
> >
> > -	pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
> > -
> > -	amdgpu_dpm_dispatch_task(adev,
> > -				 AMD_PP_TASK_READJUST_POWER_STATE,
> > -				 NULL);
> > -
> > -	return 0;
> > +	return pp_funcs->set_mclk_od(adev->powerplay.pp_handle, value);
> >   }
> >
> >   int amdgpu_dpm_get_power_profile_mode(struct amdgpu_device *adev,
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> > b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> > index fa2f4e11e94e..89e1134d660f 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
> > @@ -187,6 +187,10 @@ static ssize_t amdgpu_set_power_dpm_state(struct device *dev,
> >
> >   	amdgpu_dpm_set_power_state(adev, state);
> >
> > +	amdgpu_dpm_dispatch_task(adev,
> > +				 AMD_PP_TASK_ENABLE_USER_STATE,
> > +				 &state);
> > +
> 
> This is just the opposite of what has been done so far. The idea is to keep the
> logic inside dpm_* calls and not to keep the logic in amdgpu_pm. This does
> the reverse. I guess this patch can be dropped.
[Quan, Evan] The situation here is:
1. In some cases amdgpu_dpm_dispatch_task() is included/integrated, e.g. in amdgpu_dpm_set_mclk_od() and amdgpu_dpm_set_sclk_od().
2. In other cases amdgpu_dpm_dispatch_task() is called separately, e.g. by amdgpu_set_pp_force_state() and amdgpu_set_pp_od_clk_voltage() from amdgpu_pm.c.
That mix makes it tricky to add a unified lock protection around those amdgpu_dpm_xxx() APIs. To resolve it, we can either
1. separate amdgpu_dpm_dispatch_task() out of those APIs (amdgpu_dpm_set_mclk_od() and amdgpu_dpm_set_sclk_od()), or
2. get amdgpu_dpm_dispatch_task() included also in amdgpu_set_pp_force_state() and amdgpu_set_pp_od_clk_voltage().
After some consideration, I believe 1 is the more proper way, as the current implementation of amdgpu_dpm_set_mclk_od() really combines two separate logics together.
The amdgpu_dpm_dispatch_task() call should be split out, as in the sketch below.
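
A condensed view of option 1, matching the amdgpu_set_pp_sclk_od() hunk 
in this patch (the error handling is shortened here for brevity):

	ret = amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
	if (!ret)
		amdgpu_dpm_dispatch_task(adev,
					 AMD_PP_TASK_READJUST_POWER_STATE,
					 NULL);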

BR
Evan
> 
> Thanks,
> Lijo
> 
> >   	pm_runtime_mark_last_busy(ddev->dev);
> >   	pm_runtime_put_autosuspend(ddev->dev);
> >
> > @@ -1278,7 +1282,16 @@ static ssize_t amdgpu_set_pp_sclk_od(struct device *dev,
> >   		return ret;
> >   	}
> >
> > -	amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> > +	ret = amdgpu_dpm_set_sclk_od(adev, (uint32_t)value);
> > +	if (ret) {
> > +		pm_runtime_mark_last_busy(ddev->dev);
> > +		pm_runtime_put_autosuspend(ddev->dev);
> > +		return ret;
> > +	}
> > +
> > +	amdgpu_dpm_dispatch_task(adev,
> > +				 AMD_PP_TASK_READJUST_POWER_STATE,
> > +				 NULL);
> >
> >   	pm_runtime_mark_last_busy(ddev->dev);
> >   	pm_runtime_put_autosuspend(ddev->dev);
> > @@ -1340,7 +1353,16 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
> >   		return ret;
> >   	}
> >
> > -	amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> > +	ret = amdgpu_dpm_set_mclk_od(adev, (uint32_t)value);
> > +	if (ret) {
> > +		pm_runtime_mark_last_busy(ddev->dev);
> > +		pm_runtime_put_autosuspend(ddev->dev);
> > +		return ret;
> > +	}
> > +
> > +	amdgpu_dpm_dispatch_task(adev,
> > +				 AMD_PP_TASK_READJUST_POWER_STATE,
> > +				 NULL);
> >
> >   	pm_runtime_mark_last_busy(ddev->dev);
> >   	pm_runtime_put_autosuspend(ddev->dev);
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-12-01  3:13     ` Quan, Evan
@ 2021-12-01  4:19       ` Lazar, Lijo
  2021-12-01  7:17         ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-12-01  4:19 UTC (permalink / raw)
  To: Quan, Evan, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian



On 12/1/2021 8:43 AM, Quan, Evan wrote:
> [AMD Official Use Only]
> 
> 
> 
>> -----Original Message-----
>> From: Lazar, Lijo <Lijo.Lazar@amd.com>
>> Sent: Tuesday, November 30, 2021 9:21 PM
>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
>> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
>> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
>> APIs used only by legacy ASICs(si/kv)
>>
>>
>>
>> On 11/30/2021 1:12 PM, Evan Quan wrote:
>>> Those APIs are used only by legacy ASICs(si/kv). They cannot be
>>> shared by other ASICs. So, we create a new holder for them.
>>>
>>> Signed-off-by: Evan Quan <evan.quan@amd.com>
>>> Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
>>> --
>>> v1->v2:
>>>     - move other APIs used by si/kv in amdgpu_atombios.c to the new
>>>       holder also(Alex)
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
>>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
>>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
>>>    drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
>>>    drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
>>>    drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
>>>    drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453
>> +++++++++++++++++
>>>    drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
>>>    drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
>>>    10 files changed, 1534 insertions(+), 1470 deletions(-)
>>>    create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>>>    create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>> index 12a6b1c99c93..f2e447212e62 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>> @@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>>>    	return 0;
>>>    }
>>>
>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>> -					    u32 clock,
>>> -					    bool strobe_mode,
>>> -					    struct atom_mpll_param *mpll_param)
>>> -{
>>> -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
>>> -	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
>>> -	u8 frev, crev;
>>> -
>>> -	memset(&args, 0, sizeof(args));
>>> -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
>>> -
>>> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
>>> -		return -EINVAL;
>>> -
>>> -	switch (frev) {
>>> -	case 2:
>>> -		switch (crev) {
>>> -		case 1:
>>> -			/* SI */
>>> -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
>>> -			args.ucInputFlag = 0;
>>> -			if (strobe_mode)
>>> -				args.ucInputFlag |=
>> MPLL_INPUT_FLAG_STROBE_MODE_EN;
>>> -
>>> -			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> -
>>> -			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
>>> -			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
>>> -			mpll_param->post_div = args.ucPostDiv;
>>> -			mpll_param->dll_speed = args.ucDllSpeed;
>>> -			mpll_param->bwcntl = args.ucBWCntl;
>>> -			mpll_param->vco_mode =
>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
>>> -			mpll_param->yclk_sel =
>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
>>> -			mpll_param->qdr =
>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
>>> -			mpll_param->half_rate =
>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
>>> -			break;
>>> -		default:
>>> -			return -EINVAL;
>>> -		}
>>> -		break;
>>> -	default:
>>> -		return -EINVAL;
>>> -	}
>>> -	return 0;
>>> -}
>>> -
>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>> -					     u32 eng_clock, u32 mem_clock)
>>> -{
>>> -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
>>> -	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
>>> -	u32 tmp;
>>> -
>>> -	memset(&args, 0, sizeof(args));
>>> -
>>> -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
>>> -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
>>> -
>>> -	args.ulTargetEngineClock = cpu_to_le32(tmp);
>>> -	if (mem_clock)
>>> -		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
>>> -
>>> -	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> -}
>>> -
>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd)
>>> -{
>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
>>> -	u8 frev, crev;
>>> -	u16 data_offset;
>>> -	union firmware_info *firmware_info;
>>> -
>>> -	*vddc = 0;
>>> -	*vddci = 0;
>>> -	*mvdd = 0;
>>> -
>>> -	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> -				   &frev, &crev, &data_offset)) {
>>> -		firmware_info =
>>> -			(union firmware_info *)(mode_info->atom_context->bios +
>>> -						data_offset);
>>> -		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
>>> -		if ((frev == 2) && (crev >= 2)) {
>>> -			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
>>> -			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
>>> -		}
>>> -	}
>>> -}
>>> -
>>> -union set_voltage {
>>> -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
>>> -	struct _SET_VOLTAGE_PARAMETERS v1;
>>> -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
>>> -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
>>> -};
>>> -
>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>> -			     u16 voltage_id, u16 *voltage)
>>> -{
>>> -	union set_voltage args;
>>> -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
>>> -	u8 frev, crev;
>>> -
>>> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
>>> -		return -EINVAL;
>>> -
>>> -	switch (crev) {
>>> -	case 1:
>>> -		return -EINVAL;
>>> -	case 2:
>>> -		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
>>> -		args.v2.ucVoltageMode = 0;
>>> -		args.v2.usVoltageLevel = 0;
>>> -
>>> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> -
>>> -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
>>> -		break;
>>> -	case 3:
>>> -		args.v3.ucVoltageType = voltage_type;
>>> -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
>>> -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
>>> -
>>> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> -
>>> -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
>>> -		break;
>>> -	default:
>>> -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>> -		return -EINVAL;
>>> -	}
>>> -
>>> -	return 0;
>>> -}
>>> -
>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>> -						      u16 *voltage,
>>> -						      u16 leakage_idx)
>>> -{
>>> -	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
>>> -}
>>> -
>>> -union voltage_object_info {
>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
>>> -};
>>> -
>>> -union voltage_object {
>>> -	struct _ATOM_VOLTAGE_OBJECT v1;
>>> -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
>>> -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
>>> -};
>>> -
>>> -
>>> -static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
>>> -									u8 voltage_type, u8 voltage_mode)
>>> -{
>>> -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
>>> -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
>>> -	u8 *start = (u8 *)v3;
>>> -
>>> -	while (offset < size) {
>>> -		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
>>> -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
>>> -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
>>> -			return vo;
>>> -		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
>>> -	}
>>> -	return NULL;
>>> -}
>>> -
>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>> -			      u8 voltage_type,
>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
>>> -{
>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> -	u8 frev, crev;
>>> -	u16 data_offset, size;
>>> -	union voltage_object_info *voltage_info;
>>> -	union voltage_object *voltage_object = NULL;
>>> -
>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> -				   &frev, &crev, &data_offset)) {
>>> -		voltage_info = (union voltage_object_info *)
>>> -			(adev->mode_info.atom_context->bios + data_offset);
>>> -
>>> -		switch (frev) {
>>> -		case 3:
>>> -			switch (crev) {
>>> -			case 1:
>>> -				voltage_object = (union voltage_object *)
>>> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> -										 voltage_type,
>>> -										 VOLTAGE_OBJ_SVID2);
>>> -				if (voltage_object) {
>>> -					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
>>> -					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
>>> -				} else {
>>> -					return -EINVAL;
>>> -				}
>>> -				break;
>>> -			default:
>>> -				DRM_ERROR("unknown voltage object
>> table\n");
>>> -				return -EINVAL;
>>> -			}
>>> -			break;
>>> -		default:
>>> -			DRM_ERROR("unknown voltage object table\n");
>>> -			return -EINVAL;
>>> -		}
>>> -
>>> -	}
>>> -	return 0;
>>> -}
>>> -
>>> -bool
>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>> -				u8 voltage_type, u8 voltage_mode)
>>> -{
>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> -	u8 frev, crev;
>>> -	u16 data_offset, size;
>>> -	union voltage_object_info *voltage_info;
>>> -
>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> -				   &frev, &crev, &data_offset)) {
>>> -		voltage_info = (union voltage_object_info *)
>>> -			(adev->mode_info.atom_context->bios + data_offset);
>>> -
>>> -		switch (frev) {
>>> -		case 3:
>>> -			switch (crev) {
>>> -			case 1:
>>> -				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> -									     voltage_type, voltage_mode))
>>> -					return true;
>>> -				break;
>>> -			default:
>>> -				DRM_ERROR("unknown voltage object
>> table\n");
>>> -				return false;
>>> -			}
>>> -			break;
>>> -		default:
>>> -			DRM_ERROR("unknown voltage object table\n");
>>> -			return false;
>>> -		}
>>> -
>>> -	}
>>> -	return false;
>>> -}
>>> -
>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>> -				      u8 voltage_type, u8 voltage_mode,
>>> -				      struct atom_voltage_table *voltage_table)
>>> -{
>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> -	u8 frev, crev;
>>> -	u16 data_offset, size;
>>> -	int i;
>>> -	union voltage_object_info *voltage_info;
>>> -	union voltage_object *voltage_object = NULL;
>>> -
>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> -				   &frev, &crev, &data_offset)) {
>>> -		voltage_info = (union voltage_object_info *)
>>> -			(adev->mode_info.atom_context->bios + data_offset);
>>> -
>>> -		switch (frev) {
>>> -		case 3:
>>> -			switch (crev) {
>>> -			case 1:
>>> -				voltage_object = (union voltage_object *)
>>> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> -										 voltage_type, voltage_mode);
>>> -				if (voltage_object) {
>>> -					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
>>> -						&voltage_object->v3.asGpioVoltageObj;
>>> -					VOLTAGE_LUT_ENTRY_V2 *lut;
>>> -					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
>>> -						return -EINVAL;
>>> -					lut = &gpio->asVolGpioLut[0];
>>> -					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
>>> -						voltage_table->entries[i].value =
>>> -							le16_to_cpu(lut->usVoltageValue);
>>> -						voltage_table->entries[i].smio_low =
>>> -							le32_to_cpu(lut->ulVoltageId);
>>> -						lut = (VOLTAGE_LUT_ENTRY_V2 *)
>>> -							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
>>> -					}
>>> -					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
>>> -					voltage_table->count = gpio->ucGpioEntryNum;
>>> -					voltage_table->phase_delay = gpio->ucPhaseDelay;
>>> -					return 0;
>>> -				}
>>> -				break;
>>> -			default:
>>> -				DRM_ERROR("unknown voltage object
>> table\n");
>>> -				return -EINVAL;
>>> -			}
>>> -			break;
>>> -		default:
>>> -			DRM_ERROR("unknown voltage object table\n");
>>> -			return -EINVAL;
>>> -		}
>>> -	}
>>> -	return -EINVAL;
>>> -}
>>> -
>>> -union vram_info {
>>> -	struct _ATOM_VRAM_INFO_V3 v1_3;
>>> -	struct _ATOM_VRAM_INFO_V4 v1_4;
>>> -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
>>> -};
>>> -
>>> -#define MEM_ID_MASK           0xff000000
>>> -#define MEM_ID_SHIFT          24
>>> -#define CLOCK_RANGE_MASK      0x00ffffff
>>> -#define CLOCK_RANGE_SHIFT     0
>>> -#define LOW_NIBBLE_MASK       0xf
>>> -#define DATA_EQU_PREV         0
>>> -#define DATA_FROM_TABLE       4
>>> -
>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>> -				      u8 module_index,
>>> -				      struct atom_mc_reg_table *reg_table)
>>> -{
>>> -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
>>> -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
>>> -	u32 i = 0, j;
>>> -	u16 data_offset, size;
>>> -	union vram_info *vram_info;
>>> -
>>> -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
>>> -
>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> -				   &frev, &crev, &data_offset)) {
>>> -		vram_info = (union vram_info *)
>>> -			(adev->mode_info.atom_context->bios + data_offset);
>>> -		switch (frev) {
>>> -		case 1:
>>> -			DRM_ERROR("old table version %d, %d\n", frev,
>> crev);
>>> -			return -EINVAL;
>>> -		case 2:
>>> -			switch (crev) {
>>> -			case 1:
>>> -				if (module_index < vram_info-
>>> v2_1.ucNumOfVRAMModule) {
>>> -					ATOM_INIT_REG_BLOCK *reg_block
>> =
>>> -						(ATOM_INIT_REG_BLOCK *)
>>> -						((u8 *)vram_info +
>> le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
>>> -
>> 	ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
>>> -
>> 	(ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>> -						((u8 *)reg_block + (2 *
>> sizeof(u16)) +
>>> -						 le16_to_cpu(reg_block-
>>> usRegIndexTblSize));
>>> -					ATOM_INIT_REG_INDEX_FORMAT
>> *format = &reg_block->asRegIndexBuf[0];
>>> -					num_entries =
>> (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
>>> -
>> sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
>>> -					if (num_entries >
>> VBIOS_MC_REGISTER_ARRAY_SIZE)
>>> -						return -EINVAL;
>>> -					while (i < num_entries) {
>>> -						if (format-
>>> ucPreRegDataLength & ACCESS_PLACEHOLDER)
>>> -							break;
>>> -						reg_table-
>>> mc_reg_address[i].s1 =
>>> -
>> 	(u16)(le16_to_cpu(format->usRegIndex));
>>> -						reg_table-
>>> mc_reg_address[i].pre_reg_data =
>>> -							(u8)(format-
>>> ucPreRegDataLength);
>>> -						i++;
>>> -						format =
>> (ATOM_INIT_REG_INDEX_FORMAT *)
>>> -							((u8 *)format +
>> sizeof(ATOM_INIT_REG_INDEX_FORMAT));
>>> -					}
>>> -					reg_table->last = i;
>>> -					while ((le32_to_cpu(*(u32
>> *)reg_data) != END_OF_REG_DATA_BLOCK) &&
>>> -					       (num_ranges <
>> VBIOS_MAX_AC_TIMING_ENTRIES)) {
>>> -						t_mem_id =
>> (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
>>> -								>>
>> MEM_ID_SHIFT);
>>> -						if (module_index ==
>> t_mem_id) {
>>> -							reg_table-
>>> mc_reg_table_entry[num_ranges].mclk_max =
>>> -
>> 	(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
>>> -								      >>
>> CLOCK_RANGE_SHIFT);
>>> -							for (i = 0, j = 1; i <
>> reg_table->last; i++) {
>>> -								if ((reg_table-
>>> mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
>> DATA_FROM_TABLE) {
>>> -
>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>> -
>> 	(u32)le32_to_cpu(*((u32 *)reg_data + j));
>>> -									j++;
>>> -								} else if
>> ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
>> DATA_EQU_PREV) {
>>> -
>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>> -
>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
>>> -								}
>>> -							}
>>> -							num_ranges++;
>>> -						}
>>> -						reg_data =
>> (ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>> -							((u8 *)reg_data +
>> le16_to_cpu(reg_block->usRegDataBlkSize));
>>> -					}
>>> -					if (le32_to_cpu(*(u32 *)reg_data) !=
>> END_OF_REG_DATA_BLOCK)
>>> -						return -EINVAL;
>>> -					reg_table->num_entries =
>> num_ranges;
>>> -				} else
>>> -					return -EINVAL;
>>> -				break;
>>> -			default:
>>> -				DRM_ERROR("Unknown table
>> version %d, %d\n", frev, crev);
>>> -				return -EINVAL;
>>> -			}
>>> -			break;
>>> -		default:
>>> -			DRM_ERROR("Unknown table version %d, %d\n",
>> frev, crev);
>>> -			return -EINVAL;
>>> -		}
>>> -		return 0;
>>> -	}
>>> -	return -EINVAL;
>>> -}
>>> -
>>>    bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
>>>    {
>>>    	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>> index 27e74b1fc260..cb5649298dcb 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>> @@ -160,26 +160,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>>>    				       bool strobe_mode,
>>>    				       struct atom_clock_dividers *dividers);
>>>
>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>> -					    u32 clock,
>>> -					    bool strobe_mode,
>>> -					    struct atom_mpll_param
>> *mpll_param);
>>> -
>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>> -					     u32 eng_clock, u32 mem_clock);
>>> -
>>> -bool
>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>> -				u8 voltage_type, u8 voltage_mode);
>>> -
>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>> -				      u8 voltage_type, u8 voltage_mode,
>>> -				      struct atom_voltage_table *voltage_table);
>>> -
>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>> -				      u8 module_index,
>>> -				      struct atom_mc_reg_table *reg_table);
>>> -
>>>    bool amdgpu_atombios_has_gpu_virtualization_table(struct
>> amdgpu_device *adev);
>>>
>>>    void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev,
>> bool lock);
>>> @@ -190,21 +170,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
>>>    bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
>>>
>>>    void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>> -			     u16 voltage_id, u16 *voltage);
>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>> -						      u16 *voltage,
>>> -						      u16 leakage_idx);
>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd);
>>>    int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>>>    				       u8 clock_type,
>>>    				       u32 clock,
>>>    				       bool strobe_mode,
>>>    				       struct atom_clock_dividers *dividers);
>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>> -			      u8 voltage_type,
>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
>>>
>>>    int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
>>>    				   uint32_t table,
>>
>>
>> Whether used in legacy or new logic, atombios table parsing/execution
>> should be kept as separate logic. These shouldn't be moved along with dpm.
> [Quan, Evan] Are you suggesting another placeholder for those atombios APIs? Like a legacy_atombios.c?

What I meant is there is no need to move them; keep them in the same 
file. We also have atomfirmware; splitting this out and adding another 
legacy_atombios is not required.
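
In other words, something like this (a rough sketch; the wrapper name 
below is made up and not from the patch): the dpm logic moves to 
legacy_dpm.c, but it keeps calling the shared parser that stays in 
amdgpu_atombios.c:

	/* in legacy_dpm.c (hypothetical helper) */
	static int legacy_dpm_init_mc_regs(struct amdgpu_device *adev,
					   struct atom_mc_reg_table *table)
	{
		/* shared AtomBIOS helper remains in amdgpu_atombios.c */
		return amdgpu_atombios_init_mc_reg_table(adev, 0, table);
	}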

>>
>>
>>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> index 2e295facd086..cdf724dcf832 100644
>>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> @@ -404,6 +404,7 @@ struct amd_pm_funcs {
>>>    	int (*get_dpm_clock_table)(void *handle,
>>>    				   struct dpm_clocks *clock_table);
>>>    	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
>>> +	int (*change_power_state)(void *handle);
>>>    };
>>>
>>>    struct metrics_table_header {
>>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>> b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> index ecaf0081bc31..c6801d10cde6 100644
>>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> @@ -34,113 +34,9 @@
>>>
>>>    #define WIDTH_4K 3840
>>>
>>> -#define amdgpu_dpm_pre_set_power_state(adev) \
>>> -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
>>> -
>>> -#define amdgpu_dpm_post_set_power_state(adev) \
>>> -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
>>> -
>>> -#define amdgpu_dpm_display_configuration_changed(adev) \
>>> -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
>>> -
>>> -#define amdgpu_dpm_print_power_state(adev, ps) \
>>> -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
>>> -
>>> -#define amdgpu_dpm_vblank_too_short(adev) \
>>> -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
>>> -
>>>    #define amdgpu_dpm_enable_bapm(adev, e) \
>>>    		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
>>>
>>> -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
>>> -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
>>> -
>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
>>> -{
>>> -	const char *s;
>>> -
>>> -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
>>> -	default:
>>> -		s = "none";
>>> -		break;
>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
>>> -		s = "battery";
>>> -		break;
>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
>>> -		s = "balanced";
>>> -		break;
>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
>>> -		s = "performance";
>>> -		break;
>>> -	}
>>> -	printk("\tui class: %s\n", s);
>>> -	printk("\tinternal class:");
>>> -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
>>> -	    (class2 == 0))
>>> -		pr_cont(" none");
>>> -	else {
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
>>> -			pr_cont(" boot");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>> -			pr_cont(" thermal");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
>>> -			pr_cont(" limited_pwr");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
>>> -			pr_cont(" rest");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
>>> -			pr_cont(" forced");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>> -			pr_cont(" 3d_perf");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
>>> -			pr_cont(" ovrdrv");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
>>> -			pr_cont(" uvd");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
>>> -			pr_cont(" 3d_low");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>> -			pr_cont(" acpi");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>> -			pr_cont(" uvd_hd2");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>> -			pr_cont(" uvd_hd");
>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>> -			pr_cont(" uvd_sd");
>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
>>> -			pr_cont(" limited_pwr2");
>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>> -			pr_cont(" ulv");
>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>> -			pr_cont(" uvd_mvc");
>>> -	}
>>> -	pr_cont("\n");
>>> -}
>>> -
>>> -void amdgpu_dpm_print_cap_info(u32 caps)
>>> -{
>>> -	printk("\tcaps:");
>>> -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
>>> -		pr_cont(" single_disp");
>>> -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
>>> -		pr_cont(" video");
>>> -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
>>> -		pr_cont(" no_dc");
>>> -	pr_cont("\n");
>>> -}
>>> -
>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>> -				struct amdgpu_ps *rps)
>>> -{
>>> -	printk("\tstatus:");
>>> -	if (rps == adev->pm.dpm.current_ps)
>>> -		pr_cont(" c");
>>> -	if (rps == adev->pm.dpm.requested_ps)
>>> -		pr_cont(" r");
>>> -	if (rps == adev->pm.dpm.boot_ps)
>>> -		pr_cont(" b");
>>> -	pr_cont("\n");
>>> -}
>>> -
>>>    static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
>>>    {
>>>    	struct drm_device *ddev = adev_to_drm(adev);
>>> @@ -161,7 +57,6 @@ static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
>>>    	}
>>>    }
>>>
>>> -
>>>    u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
>>>    {
>>>    	struct drm_device *dev = adev_to_drm(adev);
>>> @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
>>>    	return vrefresh;
>>>    }
>>>
>>> -union power_info {
>>> -	struct _ATOM_POWERPLAY_INFO info;
>>> -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
>>> -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
>>> -};
>>> -
>>> -union fan_info {
>>> -	struct _ATOM_PPLIB_FANTABLE fan;
>>> -	struct _ATOM_PPLIB_FANTABLE2 fan2;
>>> -	struct _ATOM_PPLIB_FANTABLE3 fan3;
>>> -};
>>> -
>>> -static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
>>> -					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
>>> -{
>>> -	u32 size = atom_table->ucNumEntries *
>>> -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> -	int i;
>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
>>> -
>>> -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
>>> -	if (!amdgpu_table->entries)
>>> -		return -ENOMEM;
>>> -
>>> -	entry = &atom_table->entries[0];
>>> -	for (i = 0; i < atom_table->ucNumEntries; i++) {
>>> -		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
>>> -			(entry->ucClockHigh << 16);
>>> -		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
>>> -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
>>> -			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
>>> -	}
>>> -	amdgpu_table->count = atom_table->ucNumEntries;
>>> -
>>> -	return 0;
>>> -}
>>> -
>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
>>> -{
>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> -	union power_info *power_info;
>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> -	u16 data_offset;
>>> -	u8 frev, crev;
>>> -
>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> -				   &frev, &crev, &data_offset))
>>> -		return -EINVAL;
>>> -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>> -
>>> -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
>>> -	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
>>> -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
>>> -
>>> -	return 0;
>>> -}
>>> -
>>> -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
>>> -
>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
>>> -{
>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> -	union power_info *power_info;
>>> -	union fan_info *fan_info;
>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> -	u16 data_offset;
>>> -	u8 frev, crev;
>>> -	int ret, i;
>>> -
>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> -				   &frev, &crev, &data_offset))
>>> -		return -EINVAL;
>>> -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>> -
>>> -	/* fan table */
>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>> -		if (power_info->pplib3.usFanTableOffset) {
>>> -			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
>>> -						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
>>> -			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
>>> -			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
>>> -			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
>>> -			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
>>> -			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
>>> -			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
>>> -			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
>>> -			if (fan_info->fan.ucFanTableFormat >= 2)
>>> -				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
>>> -			else
>>> -				adev->pm.dpm.fan.t_max = 10900;
>>> -			adev->pm.dpm.fan.cycle_delay = 100000;
>>> -			if (fan_info->fan.ucFanTableFormat >= 3) {
>>> -				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
>>> -				adev->pm.dpm.fan.default_max_fan_pwm =
>>> -					le16_to_cpu(fan_info->fan3.usFanPWMMax);
>>> -				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
>>> -				adev->pm.dpm.fan.fan_output_sensitivity =
>>> -					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
>>> -			}
>>> -			adev->pm.dpm.fan.ucode_fan_control = true;
>>> -		}
>>> -	}
>>> -
>>> -	/* clock dependency tables, shedding tables */
>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
>>> -		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
>>> -								 dep_table);
>>> -			if (ret) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return ret;
>>> -			}
>>> -		}
>>> -		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
>>> -								 dep_table);
>>> -			if (ret) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return ret;
>>> -			}
>>> -		}
>>> -		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
>>> -								 dep_table);
>>> -			if (ret) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return ret;
>>> -			}
>>> -		}
>>> -		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
>>> -								 dep_table);
>>> -			if (ret) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return ret;
>>> -			}
>>> -		}
>>> -		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
>>> -			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
>>> -				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
>>> -			if (clk_v->ucNumEntries) {
>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
>>> -					le16_to_cpu(clk_v->entries[0].usSclkLow) |
>>> -					(clk_v->entries[0].ucSclkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
>>> -					le16_to_cpu(clk_v->entries[0].usMclkLow) |
>>> -					(clk_v->entries[0].ucMclkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
>>> -					le16_to_cpu(clk_v->entries[0].usVddc);
>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
>>> -					le16_to_cpu(clk_v->entries[0].usVddci);
>>> -			}
>>> -		}
>>> -		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
>>> -			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
>>> -				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
>>> -			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
>>> -
>>> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
>>> -				kcalloc(psl->ucNumEntries,
>>> -					sizeof(struct amdgpu_phase_shedding_limits_entry),
>>> -					GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -
>>> -			entry = &psl->entries[0];
>>> -			for (i = 0; i < psl->ucNumEntries; i++) {
>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
>>> -					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
>>> -					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
>>> -					le16_to_cpu(entry->usVoltage);
>>> -				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
>>> -			}
>>> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
>>> -				psl->ucNumEntries;
>>> -		}
>>> -	}
>>> -
>>> -	/* cac data */
>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
>>> -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
>>> -		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
>>> -		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
>>> -		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
>>> -		if (adev->pm.dpm.tdp_od_limit)
>>> -			adev->pm.dpm.power_control = true;
>>> -		else
>>> -			adev->pm.dpm.power_control = false;
>>> -		adev->pm.dpm.tdp_adjustment = 0;
>>> -		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
>>> -		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
>>> -		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
>>> -		if (power_info->pplib5.usCACLeakageTableOffset) {
>>> -			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
>>> -				(ATOM_PPLIB_CAC_Leakage_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
>>> -			ATOM_PPLIB_CAC_Leakage_Record *entry;
>>> -			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
>>> -			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			entry = &cac_table->entries[0];
>>> -			for (i = 0; i < cac_table->ucNumEntries; i++) {
>>> -				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
>>> -						le16_to_cpu(entry->usVddc1);
>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
>>> -						le16_to_cpu(entry->usVddc2);
>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
>>> -						le16_to_cpu(entry->usVddc3);
>>> -				} else {
>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
>>> -						le16_to_cpu(entry->usVddc);
>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
>>> -						le32_to_cpu(entry->ulLeakageValue);
>>> -				}
>>> -				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
>>> -			}
>>> -			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
>>> -		}
>>> -	}
>>> -
>>> -	/* ext tables */
>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>> -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
>>> -			(mode_info->atom_context->bios + data_offset +
>>> -			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
>>> -			ext_hdr->usVCETableOffset) {
>>> -			VCEClockInfoArray *array = (VCEClockInfoArray *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
>>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
>>> -				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>> -				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
>>> -			ATOM_PPLIB_VCE_State_Table *states =
>>> -				(ATOM_PPLIB_VCE_State_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>> -				 1 + (array->ucNumEntries * sizeof (VCEClockInfo)) +
>>> -				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
>>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
>>> -			ATOM_PPLIB_VCE_State_Record *state_entry;
>>> -			VCEClockInfo *vce_clk;
>>> -			u32 size = limits->numEntries *
>>> -				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
>>> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
>>> -				kzalloc(size, GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
>>> -				limits->numEntries;
>>> -			entry = &limits->entries[0];
>>> -			state_entry = &states->entries[0];
>>> -			for (i = 0; i < limits->numEntries; i++) {
>>> -				vce_clk = (VCEClockInfo *)
>>> -					((u8 *)&array->entries[0] +
>>> -					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
>>> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
>>> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
>>> -					le16_to_cpu(entry->usVoltage);
>>> -				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
>>> -			}
>>> -			adev->pm.dpm.num_of_vce_states =
>>> -					states->numEntries > AMD_MAX_VCE_LEVELS ?
>>> -					AMD_MAX_VCE_LEVELS : states->numEntries;
>>> -			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
>>> -				vce_clk = (VCEClockInfo *)
>>> -					((u8 *)&array->entries[0] +
>>> -					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>> -				adev->pm.dpm.vce_states[i].evclk =
>>> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>> -				adev->pm.dpm.vce_states[i].ecclk =
>>> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>> -				adev->pm.dpm.vce_states[i].clk_idx =
>>> -					state_entry->ucClockInfoIndex & 0x3f;
>>> -				adev->pm.dpm.vce_states[i].pstate =
>>> -					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
>>> -				state_entry = (ATOM_PPLIB_VCE_State_Record *)
>>> -					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
>>> -			}
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
>>> -			ext_hdr->usUVDTableOffset) {
>>> -			UVDClockInfoArray *array = (UVDClockInfoArray *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
>>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
>>> -				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
>>> -				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
>>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
>>> -			u32 size = limits->numEntries *
>>> -				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
>>> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
>>> -				kzalloc(size, GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
>>> -				limits->numEntries;
>>> -			entry = &limits->entries[0];
>>> -			for (i = 0; i < limits->numEntries; i++) {
>>> -				UVDClockInfo *uvd_clk = (UVDClockInfo *)
>>> -					((u8 *)&array->entries[0] +
>>> -					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
>>> -					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
>>> -					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
>>> -					le16_to_cpu(entry->usVoltage);
>>> -				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
>>> -			}
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
>>> -			ext_hdr->usSAMUTableOffset) {
>>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
>>> -				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
>>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
>>> -			u32 size = limits->numEntries *
>>> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
>>> -				kzalloc(size, GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
>>> -				limits->numEntries;
>>> -			entry = &limits->entries[0];
>>> -			for (i = 0; i < limits->numEntries; i++) {
>>> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
>>> -					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
>>> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
>>> -					le16_to_cpu(entry->usVoltage);
>>> -				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
>>> -			}
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
>>> -		    ext_hdr->usPPMTableOffset) {
>>> -			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usPPMTableOffset));
>>> -			adev->pm.dpm.dyn_state.ppm_table =
>>> -				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.ppm_table) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
>>> -			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
>>> -				le16_to_cpu(ppm->usCpuCoreNumber);
>>> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
>>> -				le32_to_cpu(ppm->ulPlatformTDP);
>>> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
>>> -				le32_to_cpu(ppm->ulSmallACPlatformTDP);
>>> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
>>> -				le32_to_cpu(ppm->ulPlatformTDC);
>>> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
>>> -				le32_to_cpu(ppm->ulSmallACPlatformTDC);
>>> -			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
>>> -				le32_to_cpu(ppm->ulApuTDP);
>>> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
>>> -				le32_to_cpu(ppm->ulDGpuTDP);
>>> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
>>> -				le32_to_cpu(ppm->ulDGpuUlvPower);
>>> -			adev->pm.dpm.dyn_state.ppm_table->tj_max =
>>> -				le32_to_cpu(ppm->ulTjmax);
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
>>> -			ext_hdr->usACPTableOffset) {
>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
>>> -				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
>>> -			u32 size = limits->numEntries *
>>> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
>>> -				kzalloc(size, GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
>>> -				limits->numEntries;
>>> -			entry = &limits->entries[0];
>>> -			for (i = 0; i < limits->numEntries; i++) {
>>> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
>>> -					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
>>> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
>>> -					le16_to_cpu(entry->usVoltage);
>>> -				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
>>> -			}
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
>>> -			ext_hdr->usPowerTuneTableOffset) {
>>> -			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> -			ATOM_PowerTune_Table *pt;
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table =
>>> -				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
>>> -			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
>>> -				amdgpu_free_extended_power_table(adev);
>>> -				return -ENOMEM;
>>> -			}
>>> -			if (rev > 0) {
>>> -				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
>>> -					(mode_info->atom_context->bios + data_offset +
>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
>>> -					ppt->usMaximumPowerDeliveryLimit;
>>> -				pt = &ppt->power_tune_table;
>>> -			} else {
>>> -				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
>>> -					(mode_info->atom_context->bios + data_offset +
>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
>>> -				pt = &ppt->power_tune_table;
>>> -			}
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
>>> -				le16_to_cpu(pt->usConfigurableTDP);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
>>> -				le16_to_cpu(pt->usBatteryPowerLimit);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
>>> -				le16_to_cpu(pt->usSmallPowerLimit);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
>>> -				le16_to_cpu(pt->usLowCACLeakage);
>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
>>> -				le16_to_cpu(pt->usHighCACLeakage);
>>> -		}
>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
>>> -				ext_hdr->usSclkVddgfxTableOffset) {
>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> -				(mode_info->atom_context->bios + data_offset +
>>> -				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
>>> -			ret = amdgpu_parse_clk_voltage_dep_table(
>>> -					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
>>> -					dep_table);
>>> -			if (ret) {
>>> -				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
>>> -				return ret;
>>> -			}
>>> -		}
>>> -	}
>>> -
>>> -	return 0;
>>> -}
>>> -
>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
>>> -{
>>> -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
>>> -
>>> -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
>>> -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
>>> -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
>>> -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
>>> -	kfree(dyn_state->cac_leakage_table.entries);
>>> -	kfree(dyn_state->phase_shedding_limits_table.entries);
>>> -	kfree(dyn_state->ppm_table);
>>> -	kfree(dyn_state->cac_tdp_table);
>>> -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
>>> -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
>>> -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
>>> -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
>>> -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
>>> -}
>>> -
>>> -static const char *pp_lib_thermal_controller_names[] = {
>>> -	"NONE",
>>> -	"lm63",
>>> -	"adm1032",
>>> -	"adm1030",
>>> -	"max6649",
>>> -	"lm64",
>>> -	"f75375",
>>> -	"RV6xx",
>>> -	"RV770",
>>> -	"adt7473",
>>> -	"NONE",
>>> -	"External GPIO",
>>> -	"Evergreen",
>>> -	"emc2103",
>>> -	"Sumo",
>>> -	"Northern Islands",
>>> -	"Southern Islands",
>>> -	"lm96163",
>>> -	"Sea Islands",
>>> -	"Kaveri/Kabini",
>>> -};
>>> -
>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
>>> -{
>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> -	ATOM_PPLIB_THERMALCONTROLLER *controller;
>>> -	struct amdgpu_i2c_bus_rec i2c_bus;
>>> -	u16 data_offset;
>>> -	u8 frev, crev;
>>> -
>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> -				   &frev, &crev, &data_offset))
>>> -		return;
>>> -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
>>> -		(mode_info->atom_context->bios + data_offset);
>>> -	controller = &power_table->sThermalController;
>>> -
>>> -	/* add the i2c bus for thermal/fan chip */
>>> -	if (controller->ucType > 0) {
>>> -		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
>>> -			adev->pm.no_fan = true;
>>> -		adev->pm.fan_pulses_per_revolution =
>>> -			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
>>> -		if (adev->pm.fan_pulses_per_revolution) {
>>> -			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
>>> -			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
>>> -		}
>>> -		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
>>> -			DRM_INFO("External GPIO thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
>>> -		} else if (controller->ucType ==
>>> -			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
>>> -			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
>>> -		} else if (controller->ucType ==
>>> -			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
>>> -			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
>>> -		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
>>> -			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
>>> -				 pp_lib_thermal_controller_names[controller->ucType],
>>> -				 controller->ucI2cAddress >> 1,
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
>>> -			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
>>> -			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
>>> -			if (adev->pm.i2c_bus) {
>>> -				struct i2c_board_info info = { };
>>> -				const char *name = pp_lib_thermal_controller_names[controller->ucType];
>>> -				info.addr = controller->ucI2cAddress >> 1;
>>> -				strlcpy(info.type, name, sizeof(info.type));
>>> -				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
>>> -			}
>>> -		} else {
>>> -			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
>>> -				 controller->ucType,
>>> -				 controller->ucI2cAddress >> 1,
>>> -				 (controller->ucFanParameters &
>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> -		}
>>> -	}
>>> -}
>>> -
>>> -struct amd_vce_state*
>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx)
>>> -{
>>> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -
>>> -	if (idx < adev->pm.dpm.num_of_vce_states)
>>> -		return &adev->pm.dpm.vce_states[idx];
>>> -
>>> -	return NULL;
>>> -}
>>> -
>>>    int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
>>>    {
>>>    	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>>> @@ -1243,211 +465,6 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
>>>    	amdgpu_pm_compute_clocks(adev);
>>>    }
>>>
>>> -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
>>> -						     enum amd_pm_state_type dpm_state)
>>> -{
>>> -	int i;
>>> -	struct amdgpu_ps *ps;
>>> -	u32 ui_class;
>>> -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
>>> -		true : false;
>>> -
>>> -	/* check if the vblank period is too short to adjust the mclk */
>>> -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
>>> -		if (amdgpu_dpm_vblank_too_short(adev))
>>> -			single_display = false;
>>> -	}
>>> -
>>> -	/* certain older asics have a separate 3D performance state,
>>> -	 * so try that first if the user selected performance
>>> -	 */
>>> -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
>>> -	/* balanced states don't exist at the moment */
>>> -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> -
>>> -restart_search:
>>> -	/* Pick the best power state based on current conditions */
>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
>>> -		ps = &adev->pm.dpm.ps[i];
>>> -		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
>>> -		switch (dpm_state) {
>>> -		/* user states */
>>> -		case POWER_STATE_TYPE_BATTERY:
>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> -					if (single_display)
>>> -						return ps;
>>> -				} else
>>> -					return ps;
>>> -			}
>>> -			break;
>>> -		case POWER_STATE_TYPE_BALANCED:
>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> -					if (single_display)
>>> -						return ps;
>>> -				} else
>>> -					return ps;
>>> -			}
>>> -			break;
>>> -		case POWER_STATE_TYPE_PERFORMANCE:
>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> -					if (single_display)
>>> -						return ps;
>>> -				} else
>>> -					return ps;
>>> -			}
>>> -			break;
>>> -		/* internal states */
>>> -		case POWER_STATE_TYPE_INTERNAL_UVD:
>>> -			if (adev->pm.dpm.uvd_ps)
>>> -				return adev->pm.dpm.uvd_ps;
>>> -			else
>>> -				break;
>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_BOOT:
>>> -			return adev->pm.dpm.boot_ps;
>>> -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_ACPI:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_ULV:
>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>> -				return ps;
>>> -			break;
>>> -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>> -				return ps;
>>> -			break;
>>> -		default:
>>> -			break;
>>> -		}
>>> -	}
>>> -	/* use a fallback state if we didn't match */
>>> -	switch (dpm_state) {
>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
>>> -		goto restart_search;
>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>> -		if (adev->pm.dpm.uvd_ps) {
>>> -			return adev->pm.dpm.uvd_ps;
>>> -		} else {
>>> -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> -			goto restart_search;
>>> -		}
>>> -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
>>> -		goto restart_search;
>>> -	case POWER_STATE_TYPE_INTERNAL_ACPI:
>>> -		dpm_state = POWER_STATE_TYPE_BATTERY;
>>> -		goto restart_search;
>>> -	case POWER_STATE_TYPE_BATTERY:
>>> -	case POWER_STATE_TYPE_BALANCED:
>>> -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> -		goto restart_search;
>>> -	default:
>>> -		break;
>>> -	}
>>> -
>>> -	return NULL;
>>> -}
>>> -
>>> -static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
>>> -{
>>> -	struct amdgpu_ps *ps;
>>> -	enum amd_pm_state_type dpm_state;
>>> -	int ret;
>>> -	bool equal = false;
>>> -
>>> -	/* if dpm init failed */
>>> -	if (!adev->pm.dpm_enabled)
>>> -		return;
>>> -
>>> -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
>>> -		/* add other state override checks here */
>>> -		if ((!adev->pm.dpm.thermal_active) &&
>>> -		    (!adev->pm.dpm.uvd_active))
>>> -			adev->pm.dpm.state = adev->pm.dpm.user_state;
>>> -	}
>>> -	dpm_state = adev->pm.dpm.state;
>>> -
>>> -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
>>> -	if (ps)
>>> -		adev->pm.dpm.requested_ps = ps;
>>> -	else
>>> -		return;
>>> -
>>> -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
>>> -		printk("switching from power state:\n");
>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
>>> -		printk("switching to power state:\n");
>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
>>> -	}
>>> -
>>> -	/* update whether vce is active */
>>> -	ps->vce_active = adev->pm.dpm.vce_active;
>>> -	if (adev->powerplay.pp_funcs->display_configuration_changed)
>>> -		amdgpu_dpm_display_configuration_changed(adev);
>>> -
>>> -	ret = amdgpu_dpm_pre_set_power_state(adev);
>>> -	if (ret)
>>> -		return;
>>> -
>>> -	if (adev->powerplay.pp_funcs->check_state_equal) {
>>> -		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
>>> -			equal = false;
>>> -	}
>>> -
>>> -	if (equal)
>>> -		return;
>>> -
>>> -	if (adev->powerplay.pp_funcs->set_power_state)
>>> -		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
>>> -
>>> -	amdgpu_dpm_post_set_power_state(adev);
>>> -
>>> -	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
>>> -	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
>>> -
>>> -	if (adev->powerplay.pp_funcs->force_performance_level) {
>>> -		if (adev->pm.dpm.thermal_active) {
>>> -			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
>>> -			/* force low perf level for thermal */
>>> -			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
>>> -			/* save the user's level */
>>> -			adev->pm.dpm.forced_level = level;
>>> -		} else {
>>> -			/* otherwise, user selected level */
>>> -			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
>>> -		}
>>> -	}
>>> -}
>>> -
>>>    void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>    {
>>
>> Rename to amdgpu_dpm_compute_clocks?
> [Quan, Evan] Sure, I can do that.
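> For reference, the agreed rename would be a mechanical change along these lines (sketch only):
> 
> -void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> +void amdgpu_dpm_compute_clocks(struct amdgpu_device *adev)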
>>
>>>    	int i = 0;
>>> @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>    			amdgpu_fence_wait_empty(ring);
>>>    	}
>>>
>>> -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
>>> +	if ((adev->family == AMDGPU_FAMILY_SI) ||
>>> +	     (adev->family == AMDGPU_FAMILY_KV)) {
>>> +		amdgpu_dpm_get_active_displays(adev);
>>> +		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
>>
>> It would be clearer if the newly added logic in this function were in a
>> separate patch. This does more than what the patch subject says.
> [Quan, Evan] Actually there is no new logic added. These paths cover the "!adev->powerplay.pp_funcs->dispatch_tasks" case.
> Considering that only SI and KV lack a ->dispatch_tasks() implementation,
> I used "((adev->family == AMDGPU_FAMILY_SI) || (adev->family == AMDGPU_FAMILY_KV))" here.
> Maybe I should stick with "!adev->powerplay.pp_funcs->dispatch_tasks"?
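> 
> For illustration, the capability-based form would look like this (a sketch only, same behavior as the hunk above assumed):
> 
> 	if (!adev->powerplay.pp_funcs->dispatch_tasks) {
> 		/* today only SI/KV lack dispatch_tasks(), i.e. the legacy path */
> 		amdgpu_dpm_get_active_displays(adev);
> 		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
> 	} else {
> 		/* existing dispatch_tasks() path, unchanged */
> 	}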

This change also adds a new callback, change_power_state(), which I read
as doing more than what the patch subject says.
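
For reference, the wiring this patch adds (as I read it) is:

	/* new member of struct amd_pm_funcs, set by kv_dpm.c/si_dpm.c */
	.change_power_state = amdgpu_dpm_change_power_state_locked,

	/* and the new call site in amdgpu_pm_compute_clocks() for SI/KV */
	adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);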

>>
>>> +	} else {
>>>    		if (!amdgpu_device_has_dc_support(adev)) {
>>> -			mutex_lock(&adev->pm.mutex);
>>>    			amdgpu_dpm_get_active_displays(adev);
>>>    			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
>>>    			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
>>> @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>    				adev->powerplay.pp_funcs->display_configuration_change(
>>>    							adev->powerplay.pp_handle,
>>>    							&adev->pm.pm_display_cfg);
>>> -			mutex_unlock(&adev->pm.mutex);
>>>    		}
>>>    		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
>>> -	} else {
>>> -		mutex_lock(&adev->pm.mutex);
>>> -		amdgpu_dpm_get_active_displays(adev);
>>> -		amdgpu_dpm_change_power_state_locked(adev);
>>> -		mutex_unlock(&adev->pm.mutex);
>>>    	}
>>>    }
>>>
>>> @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
>>>    	}
>>>    }
>>>
>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
>>> -{
>>> -	int i;
>>> -
>>> -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
>>> -		return;
>>> -
>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
>>> -		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
>>> -
>>> -}
>>> -
>>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
>>>    {
>>>    	int ret = 0;
>>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> index 01120b302590..295d2902aef7 100644
>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>> @@ -366,24 +366,10 @@ enum amdgpu_display_gap
>>>        AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
>>>    };
>>>
>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
>>> -void amdgpu_dpm_print_cap_info(u32 caps);
>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>> -				struct amdgpu_ps *rps);
>>>    u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
>>>    int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
>>>    			   void *data, uint32_t *size);
>>>
>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
>>> -
>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
>>> -
>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
>>> -
>>> -struct amd_vce_state*
>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx);
>>> -
>>>    int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
>>>    				      uint32_t block_type, bool gate);
>>>
>>> @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
>>>    void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
>>>    void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
>>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>>    int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
>>>    int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
>>>    int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>> index 0fb114adc79f..614d8b6a58ad 100644
>>> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>> @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
>>>
>>>    include $(AMD_POWERPLAY)
>>>
>>> -POWER_MGR-y = amd_powerplay.o
>>> +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
>>>
>>>    POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>> index 380a5336c74f..90f4c65659e2 100644
>>> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>> @@ -36,6 +36,7 @@
>>>
>>>    #include "gca/gfx_7_2_d.h"
>>>    #include "gca/gfx_7_2_sh_mask.h"
>>> +#include "legacy_dpm.h"
>>>
>>>    #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
>>>    #define KV_MINIMUM_ENGINE_CLOCK         800
>>> @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
>>>    	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>>>    	.check_state_equal = kv_check_state_equal,
>>>    	.read_sensor = &kv_dpm_read_sensor,
>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>>>    };
>>>
>>>    static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>>
>> This could get confused with all the APIs that support legacy dpm. This
>> file holds only a subset of the APIs needed to support legacy dpm. It
>> needs a better name - powerplay_ctrl/powerplay_util?
> [Quan, Evan] The "legacy_dpm" name refers to the logic used only by the legacy ASICs (si_dpm.c, kv_dpm.c).
> Considering this logic is not used by default (the radeon driver, instead of amdgpu, supports those legacy ASICs by default),
> we might drop support for them from the amdgpu driver. So I gathered all those APIs and put them in a new holder.
> Maybe you mistook it for a new holder for powerplay APIs (used by VI/AI)?

Since it got moved under powerplay, I thought these were also used by AI/VI
powerplay. Otherwise, move si/kv along with this out of powerplay and
keep them separate.
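
For example, something along these lines (hypothetical layout, only to make
the suggestion concrete):

# hypothetical: drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
POWER_MGR-y = legacy_dpm.o
POWER_MGR-$(CONFIG_DRM_AMDGPU_SI) += si_dpm.o si_smc.o
POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK) += kv_dpm.o kv_smc.o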

Thanks,
Lijo

> 
> BR
> Evan
>>
>> Thanks,
>> Lijo
>>
>>> new file mode 100644
>>> index 000000000000..9427c1026e1d
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>>> @@ -0,0 +1,1453 @@
>>> +/*
>>> + * Copyright 2021 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + */
>>> +
>>> +#include "amdgpu.h"
>>> +#include "amdgpu_atombios.h"
>>> +#include "amdgpu_i2c.h"
>>> +#include "atom.h"
>>> +#include "amd_pcie.h"
>>> +#include "legacy_dpm.h"
>>> +
>>> +#define amdgpu_dpm_pre_set_power_state(adev) \
>>> +		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
>>> +
>>> +#define amdgpu_dpm_post_set_power_state(adev) \
>>> +		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
>>> +
>>> +#define amdgpu_dpm_display_configuration_changed(adev) \
>>> +		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
>>> +
>>> +#define amdgpu_dpm_print_power_state(adev, ps) \
>>> +		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
>>> +
>>> +#define amdgpu_dpm_vblank_too_short(adev) \
>>> +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
>>> +
>>> +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
>>> +		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
>>> +
>>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>> +					    u32 clock,
>>> +					    bool strobe_mode,
>>> +					    struct atom_mpll_param *mpll_param)
>>> +{
>>> +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
>>> +	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
>>> +	u8 frev, crev;
>>> +
>>> +	memset(&args, 0, sizeof(args));
>>> +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
>>> +
>>> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
>>> +		return -EINVAL;
>>> +
>>> +	switch (frev) {
>>> +	case 2:
>>> +		switch (crev) {
>>> +		case 1:
>>> +			/* SI */
>>> +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
>>> +			args.ucInputFlag = 0;
>>> +			if (strobe_mode)
>>> +				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
>>> +
>>> +			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> +
>>> +			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
>>> +			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
>>> +			mpll_param->post_div = args.ucPostDiv;
>>> +			mpll_param->dll_speed = args.ucDllSpeed;
>>> +			mpll_param->bwcntl = args.ucBWCntl;
>>> +			mpll_param->vco_mode =
>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
>>> +			mpll_param->yclk_sel =
>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
>>> +			mpll_param->qdr =
>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
>>> +			mpll_param->half_rate =
>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
>>> +			break;
>>> +		default:
>>> +			return -EINVAL;
>>> +		}
>>> +		break;
>>> +	default:
>>> +		return -EINVAL;
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>> +					     u32 eng_clock, u32 mem_clock)
>>> +{
>>> +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
>>> +	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
>>> +	u32 tmp;
>>> +
>>> +	memset(&args, 0, sizeof(args));
>>> +
>>> +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
>>> +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
>>> +
>>> +	args.ulTargetEngineClock = cpu_to_le32(tmp);
>>> +	if (mem_clock)
>>> +		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
>>> +
>>> +	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> +}
>>> +
>>> +union firmware_info {
>>> +	ATOM_FIRMWARE_INFO info;
>>> +	ATOM_FIRMWARE_INFO_V1_2 info_12;
>>> +	ATOM_FIRMWARE_INFO_V1_3 info_13;
>>> +	ATOM_FIRMWARE_INFO_V1_4 info_14;
>>> +	ATOM_FIRMWARE_INFO_V2_1 info_21;
>>> +	ATOM_FIRMWARE_INFO_V2_2 info_22;
>>> +};
>>> +
>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd)
>>> +{
>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
>>> +	u8 frev, crev;
>>> +	u16 data_offset;
>>> +	union firmware_info *firmware_info;
>>> +
>>> +	*vddc = 0;
>>> +	*vddci = 0;
>>> +	*mvdd = 0;
>>> +
>>> +	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> +				   &frev, &crev, &data_offset)) {
>>> +		firmware_info =
>>> +			(union firmware_info *)(mode_info->atom_context->bios +
>>> +						data_offset);
>>> +		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
>>> +		if ((frev == 2) && (crev >= 2)) {
>>> +			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
>>> +			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
>>> +		}
>>> +	}
>>> +}
>>> +
>>> +union set_voltage {
>>> +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
>>> +	struct _SET_VOLTAGE_PARAMETERS v1;
>>> +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
>>> +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
>>> +};
>>> +
>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>> +			     u16 voltage_id, u16 *voltage)
>>> +{
>>> +	union set_voltage args;
>>> +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
>>> +	u8 frev, crev;
>>> +
>>> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
>>> +		return -EINVAL;
>>> +
>>> +	switch (crev) {
>>> +	case 1:
>>> +		return -EINVAL;
>>> +	case 2:
>>> +		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
>>> +		args.v2.ucVoltageMode = 0;
>>> +		args.v2.usVoltageLevel = 0;
>>> +
>>> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> +
>>> +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
>>> +		break;
>>> +	case 3:
>>> +		args.v3.ucVoltageType = voltage_type;
>>> +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
>>> +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
>>> +
>>> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>> +
>>> +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
>>> +		break;
>>> +	default:
>>> +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>> +						      u16 *voltage,
>>> +						      u16 leakage_idx)
>>> +{
>>> +	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
>>> +}
>>> +
>>> +union voltage_object_info {
>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
>>> +};
>>> +
>>> +union voltage_object {
>>> +	struct _ATOM_VOLTAGE_OBJECT v1;
>>> +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
>>> +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
>>> +};
>>> +
>>> +static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
>>> +									u8 voltage_type, u8 voltage_mode)
>>> +{
>>> +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
>>> +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
>>> +	u8 *start = (u8 *)v3;
>>> +
>>> +	while (offset < size) {
>>> +		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
>>> +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
>>> +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
>>> +			return vo;
>>> +		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
>>> +	}
>>> +	return NULL;
>>> +}
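
The lookup above walks a table of variable-sized records: each voltage object carries its own usSize, so iteration advances by whatever the record claims rather than by a fixed stride. The same pattern in a self-contained sketch (generic structs, not the real ATOM layout):

	#include <stdint.h>
	#include <stddef.h>

	struct rec_hdr {
		uint8_t  type;
		uint16_t size;		/* total record size, header included */
	} __attribute__((packed));

	/* Return the first record of the requested type, or NULL. A zero size
	 * field is treated as corruption so the walk cannot loop forever. */
	static const struct rec_hdr *find_rec(const uint8_t *buf, size_t len, uint8_t type)
	{
		size_t off = 0;

		while (off + sizeof(struct rec_hdr) <= len) {
			const struct rec_hdr *r = (const void *)(buf + off);

			if (r->type == type)
				return r;
			if (r->size == 0)
				break;
			off += r->size;
		}
		return NULL;
	}
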
>>> +
>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>> +			      u8 voltage_type,
>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
>>> +{
>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> +	u8 frev, crev;
>>> +	u16 data_offset, size;
>>> +	union voltage_object_info *voltage_info;
>>> +	union voltage_object *voltage_object = NULL;
>>> +
>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> +				   &frev, &crev, &data_offset)) {
>>> +		voltage_info = (union voltage_object_info *)
>>> +			(adev->mode_info.atom_context->bios + data_offset);
>>> +
>>> +		switch (frev) {
>>> +		case 3:
>>> +			switch (crev) {
>>> +			case 1:
>>> +				voltage_object = (union voltage_object *)
>>> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> +										 voltage_type,
>>> +										 VOLTAGE_OBJ_SVID2);
>>> +				if (voltage_object) {
>>> +					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
>>> +					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
>>> +				} else {
>>> +					return -EINVAL;
>>> +				}
>>> +				break;
>>> +			default:
>>> +				DRM_ERROR("unknown voltage object table\n");
>>> +				return -EINVAL;
>>> +			}
>>> +			break;
>>> +		default:
>>> +			DRM_ERROR("unknown voltage object table\n");
>>> +			return -EINVAL;
>>> +		}
>>> +
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +bool
>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>> +				u8 voltage_type, u8 voltage_mode)
>>> +{
>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> +	u8 frev, crev;
>>> +	u16 data_offset, size;
>>> +	union voltage_object_info *voltage_info;
>>> +
>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> +				   &frev, &crev, &data_offset)) {
>>> +		voltage_info = (union voltage_object_info *)
>>> +			(adev->mode_info.atom_context->bios + data_offset);
>>> +
>>> +		switch (frev) {
>>> +		case 3:
>>> +			switch (crev) {
>>> +			case 1:
>>> +				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> +									     voltage_type, voltage_mode))
>>> +					return true;
>>> +				break;
>>> +			default:
>>> +				DRM_ERROR("unknown voltage object table\n");
>>> +				return false;
>>> +			}
>>> +			break;
>>> +		default:
>>> +			DRM_ERROR("unknown voltage object table\n");
>>> +			return false;
>>> +		}
>>> +
>>> +	}
>>> +	return false;
>>> +}
>>> +
>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>> +				      u8 voltage_type, u8 voltage_mode,
>>> +				      struct atom_voltage_table *voltage_table)
>>> +{
>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>> +	u8 frev, crev;
>>> +	u16 data_offset, size;
>>> +	int i;
>>> +	union voltage_object_info *voltage_info;
>>> +	union voltage_object *voltage_object = NULL;
>>> +
>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> +				   &frev, &crev, &data_offset)) {
>>> +		voltage_info = (union voltage_object_info *)
>>> +			(adev->mode_info.atom_context->bios + data_offset);
>>> +
>>> +		switch (frev) {
>>> +		case 3:
>>> +			switch (crev) {
>>> +			case 1:
>>> +				voltage_object = (union voltage_object *)
>>> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>> +										 voltage_type, voltage_mode);
>>> +				if (voltage_object) {
>>> +					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
>>> +						&voltage_object->v3.asGpioVoltageObj;
>>> +					VOLTAGE_LUT_ENTRY_V2 *lut;
>>> +					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
>>> +						return -EINVAL;
>>> +					lut = &gpio->asVolGpioLut[0];
>>> +					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
>>> +						voltage_table->entries[i].value =
>>> +							le16_to_cpu(lut->usVoltageValue);
>>> +						voltage_table->entries[i].smio_low =
>>> +							le32_to_cpu(lut->ulVoltageId);
>>> +						lut = (VOLTAGE_LUT_ENTRY_V2 *)
>>> +							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
>>> +					}
>>> +					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
>>> +					voltage_table->count = gpio->ucGpioEntryNum;
>>> +					voltage_table->phase_delay = gpio->ucPhaseDelay;
>>> +					return 0;
>>> +				}
>>> +				break;
>>> +			default:
>>> +				DRM_ERROR("unknown voltage object table\n");
>>> +				return -EINVAL;
>>> +			}
>>> +			break;
>>> +		default:
>>> +			DRM_ERROR("unknown voltage object table\n");
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +union vram_info {
>>> +	struct _ATOM_VRAM_INFO_V3 v1_3;
>>> +	struct _ATOM_VRAM_INFO_V4 v1_4;
>>> +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
>>> +};
>>> +
>>> +#define MEM_ID_MASK           0xff000000
>>> +#define MEM_ID_SHIFT          24
>>> +#define CLOCK_RANGE_MASK      0x00ffffff
>>> +#define CLOCK_RANGE_SHIFT     0
>>> +#define LOW_NIBBLE_MASK       0xf
>>> +#define DATA_EQU_PREV         0
>>> +#define DATA_FROM_TABLE       4
>>> +
>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>> +				      u8 module_index,
>>> +				      struct atom_mc_reg_table *reg_table)
>>> +{
>>> +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
>>> +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
>>> +	u32 i = 0, j;
>>> +	u16 data_offset, size;
>>> +	union vram_info *vram_info;
>>> +
>>> +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
>>> +
>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>> +				   &frev, &crev, &data_offset)) {
>>> +		vram_info = (union vram_info *)
>>> +			(adev->mode_info.atom_context->bios + data_offset);
>>> +		switch (frev) {
>>> +		case 1:
>>> +			DRM_ERROR("old table version %d, %d\n", frev, crev);
>>> +			return -EINVAL;
>>> +		case 2:
>>> +			switch (crev) {
>>> +			case 1:
>>> +				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
>>> +					ATOM_INIT_REG_BLOCK *reg_block =
>>> +						(ATOM_INIT_REG_BLOCK *)
>>> +						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
>>> +					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
>>> +						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>> +						((u8 *)reg_block + (2 * sizeof(u16)) +
>>> +						 le16_to_cpu(reg_block->usRegIndexTblSize));
>>> +					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
>>> +					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
>>> +							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
>>> +					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
>>> +						return -EINVAL;
>>> +					while (i < num_entries) {
>>> +						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
>>> +							break;
>>> +						reg_table->mc_reg_address[i].s1 =
>>> +							(u16)(le16_to_cpu(format->usRegIndex));
>>> +						reg_table->mc_reg_address[i].pre_reg_data =
>>> +							(u8)(format->ucPreRegDataLength);
>>> +						i++;
>>> +						format = (ATOM_INIT_REG_INDEX_FORMAT *)
>>> +							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
>>> +					}
>>> +					reg_table->last = i;
>>> +					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
>>> +					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
>>> +						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
>>> +								>> MEM_ID_SHIFT);
>>> +						if (module_index == t_mem_id) {
>>> +							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
>>> +								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
>>> +								      >> CLOCK_RANGE_SHIFT);
>>> +							for (i = 0, j = 1; i < reg_table->last; i++) {
>>> +								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>> +										(u32)le32_to_cpu(*((u32 *)reg_data + j));
>>> +									j++;
>>> +								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>> +										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
>>> +								}
>>> +							}
>>> +							num_ranges++;
>>> +						}
>>> +						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>> +							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
>>> +					}
>>> +					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
>>> +						return -EINVAL;
>>> +					reg_table->num_entries = num_ranges;
>>> +				} else
>>> +					return -EINVAL;
>>> +				break;
>>> +			default:
>>> +				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>> +				return -EINVAL;
>>> +			}
>>> +			break;
>>> +		default:
>>> +			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>> +			return -EINVAL;
>>> +		}
>>> +		return 0;
>>> +	}
>>> +	return -EINVAL;
>>> +}
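
Each AC-timing data block consumed above starts with a u32 that packs the memory-module id into the top byte and the maximum mclk for the range into the low 24 bits. A worked decode of the MEM_ID/CLOCK_RANGE masks (standalone, with a made-up header value):

	#include <stdint.h>
	#include <stdio.h>

	#define MEM_ID_MASK       0xff000000u
	#define MEM_ID_SHIFT      24
	#define CLOCK_RANGE_MASK  0x00ffffffu
	#define CLOCK_RANGE_SHIFT 0

	int main(void)
	{
		uint32_t hdr = 0x02030d40;	/* example value only */
		uint8_t mem_id = (hdr & MEM_ID_MASK) >> MEM_ID_SHIFT;
		uint32_t mclk_max = (hdr & CLOCK_RANGE_MASK) >> CLOCK_RANGE_SHIFT;

		/* prints: module id 2, mclk max 200000 (0x030d40) */
		printf("module id %u, mclk max %u (0x%06x)\n", mem_id, mclk_max, mclk_max);
		return 0;
	}
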
>>> +
>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
>>> +{
>>> +	const char *s;
>>> +
>>> +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
>>> +	default:
>>> +		s = "none";
>>> +		break;
>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
>>> +		s = "battery";
>>> +		break;
>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
>>> +		s = "balanced";
>>> +		break;
>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
>>> +		s = "performance";
>>> +		break;
>>> +	}
>>> +	printk("\tui class: %s\n", s);
>>> +	printk("\tinternal class:");
>>> +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
>>> +	    (class2 == 0))
>>> +		pr_cont(" none");
>>> +	else {
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
>>> +			pr_cont(" boot");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>> +			pr_cont(" thermal");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
>>> +			pr_cont(" limited_pwr");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
>>> +			pr_cont(" rest");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
>>> +			pr_cont(" forced");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>> +			pr_cont(" 3d_perf");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
>>> +			pr_cont(" ovrdrv");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
>>> +			pr_cont(" uvd");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
>>> +			pr_cont(" 3d_low");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>> +			pr_cont(" acpi");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>> +			pr_cont(" uvd_hd2");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>> +			pr_cont(" uvd_hd");
>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>> +			pr_cont(" uvd_sd");
>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
>>> +			pr_cont(" limited_pwr2");
>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>> +			pr_cont(" ulv");
>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>> +			pr_cont(" uvd_mvc");
>>> +	}
>>> +	pr_cont("\n");
>>> +}
>>> +
>>> +void amdgpu_dpm_print_cap_info(u32 caps)
>>> +{
>>> +	printk("\tcaps:");
>>> +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
>>> +		pr_cont(" single_disp");
>>> +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
>>> +		pr_cont(" video");
>>> +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
>>> +		pr_cont(" no_dc");
>>> +	pr_cont("\n");
>>> +}
>>> +
>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>> +				struct amdgpu_ps *rps)
>>> +{
>>> +	printk("\tstatus:");
>>> +	if (rps == adev->pm.dpm.current_ps)
>>> +		pr_cont(" c");
>>> +	if (rps == adev->pm.dpm.requested_ps)
>>> +		pr_cont(" r");
>>> +	if (rps == adev->pm.dpm.boot_ps)
>>> +		pr_cont(" b");
>>> +	pr_cont("\n");
>>> +}
>>> +
>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
>>> +{
>>> +	int i;
>>> +
>>> +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
>>> +		return;
>>> +
>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
>>> +		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
>>> +
>>> +}
>>> +
>>> +union power_info {
>>> +	struct _ATOM_POWERPLAY_INFO info;
>>> +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
>>> +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
>>> +};
>>> +
>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
>>> +{
>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> +	union power_info *power_info;
>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> +	u16 data_offset;
>>> +	u8 frev, crev;
>>> +
>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> +				   &frev, &crev, &data_offset))
>>> +		return -EINVAL;
>>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>> +
>>> +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
>>> +	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
>>> +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +union fan_info {
>>> +	struct _ATOM_PPLIB_FANTABLE fan;
>>> +	struct _ATOM_PPLIB_FANTABLE2 fan2;
>>> +	struct _ATOM_PPLIB_FANTABLE3 fan3;
>>> +};
>>> +
>>> +static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
>>> +					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
>>> +{
>>> +	u32 size = atom_table->ucNumEntries *
>>> +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> +	int i;
>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
>>> +
>>> +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
>>> +	if (!amdgpu_table->entries)
>>> +		return -ENOMEM;
>>> +
>>> +	entry = &atom_table->entries[0];
>>> +	for (i = 0; i < atom_table->ucNumEntries; i++) {
>>> +		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
>>> +			(entry->ucClockHigh << 16);
>>> +		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
>>> +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
>>> +			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
>>> +	}
>>> +	amdgpu_table->count = atom_table->ucNumEntries;
>>> +
>>> +	return 0;
>>> +}
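
Note how the clock is reassembled: the records store a 16-bit little-endian low word plus an 8-bit high byte, giving a 24-bit value via le16_to_cpu(low) | (high << 16). A quick standalone check of that arithmetic (the clock unit is a driver convention, commonly 10 kHz):

	#include <stdint.h>
	#include <assert.h>

	/* Reassemble the split low/high clock encoding used by the dependency
	 * records above; 'low' is assumed already converted to CPU order. */
	static uint32_t clk_from_parts(uint16_t low, uint8_t high)
	{
		return (uint32_t)low | ((uint32_t)high << 16);
	}

	int main(void)
	{
		/* 90000 = 0x015f90 -> low word 0x5f90, high byte 0x01 */
		assert(clk_from_parts(0x5f90, 0x01) == 90000);
		return 0;
	}
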
>>> +
>>> +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
>>> +
>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
>>> +{
>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> +	union power_info *power_info;
>>> +	union fan_info *fan_info;
>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> +	u16 data_offset;
>>> +	u8 frev, crev;
>>> +	int ret, i;
>>> +
>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> +				   &frev, &crev, &data_offset))
>>> +		return -EINVAL;
>>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>> +
>>> +	/* fan table */
>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>> +		if (power_info->pplib3.usFanTableOffset) {
>>> +			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
>>> +						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
>>> +			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
>>> +			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
>>> +			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
>>> +			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
>>> +			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
>>> +			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
>>> +			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
>>> +			if (fan_info->fan.ucFanTableFormat >= 2)
>>> +				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
>>> +			else
>>> +				adev->pm.dpm.fan.t_max = 10900;
>>> +			adev->pm.dpm.fan.cycle_delay = 100000;
>>> +			if (fan_info->fan.ucFanTableFormat >= 3) {
>>> +				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
>>> +				adev->pm.dpm.fan.default_max_fan_pwm =
>>> +					le16_to_cpu(fan_info->fan3.usFanPWMMax);
>>> +				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
>>> +				adev->pm.dpm.fan.fan_output_sensitivity =
>>> +					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
>>> +			}
>>> +			adev->pm.dpm.fan.ucode_fan_control = true;
>>> +		}
>>> +	}
>>> +
>>> +	/* clock dependency tables, shedding tables */
>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
>>> +		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
>>> +								 dep_table);
>>> +			if (ret) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return ret;
>>> +			}
>>> +		}
>>> +		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
>>> +								 dep_table);
>>> +			if (ret) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return ret;
>>> +			}
>>> +		}
>>> +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
>>> +								 dep_table);
>>> +			if (ret) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return ret;
>>> +			}
>>> +		}
>>> +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
>>> +								 dep_table);
>>> +			if (ret) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return ret;
>>> +			}
>>> +		}
>>> +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
>>> +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
>>> +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
>>> +			if (clk_v->ucNumEntries) {
>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
>>> +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
>>> +					(clk_v->entries[0].ucSclkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
>>> +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
>>> +					(clk_v->entries[0].ucMclkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
>>> +					le16_to_cpu(clk_v->entries[0].usVddc);
>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
>>> +					le16_to_cpu(clk_v->entries[0].usVddci);
>>> +			}
>>> +		}
>>> +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
>>> +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
>>> +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
>>> +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
>>> +
>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
>>> +				kcalloc(psl->ucNumEntries,
>>> +					sizeof(struct amdgpu_phase_shedding_limits_entry),
>>> +					GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +
>>> +			entry = &psl->entries[0];
>>> +			for (i = 0; i < psl->ucNumEntries; i++) {
>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
>>> +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
>>> +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
>>> +					le16_to_cpu(entry->usVoltage);
>>> +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
>>> +			}
>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
>>> +				psl->ucNumEntries;
>>> +		}
>>> +	}
>>> +
>>> +	/* cac data */
>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
>>> +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
>>> +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
>>> +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
>>> +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
>>> +		if (adev->pm.dpm.tdp_od_limit)
>>> +			adev->pm.dpm.power_control = true;
>>> +		else
>>> +			adev->pm.dpm.power_control = false;
>>> +		adev->pm.dpm.tdp_adjustment = 0;
>>> +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
>>> +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
>>> +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
>>> +		if (power_info->pplib5.usCACLeakageTableOffset) {
>>> +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
>>> +				(ATOM_PPLIB_CAC_Leakage_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
>>> +			ATOM_PPLIB_CAC_Leakage_Record *entry;
>>> +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			entry = &cac_table->entries[0];
>>> +			for (i = 0; i < cac_table->ucNumEntries; i++) {
>>> +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
>>> +						le16_to_cpu(entry->usVddc1);
>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
>>> +						le16_to_cpu(entry->usVddc2);
>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
>>> +						le16_to_cpu(entry->usVddc3);
>>> +				} else {
>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
>>> +						le16_to_cpu(entry->usVddc);
>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
>>> +						le32_to_cpu(entry->ulLeakageValue);
>>> +				}
>>> +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
>>> +			}
>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
>>> +		}
>>> +	}
>>> +
>>> +	/* ext tables */
>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>> +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
>>> +			(mode_info->atom_context->bios + data_offset +
>>> +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
>>> +			ext_hdr->usVCETableOffset) {
>>> +			VCEClockInfoArray *array = (VCEClockInfoArray *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
>>> +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>> +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
>>> +			ATOM_PPLIB_VCE_State_Table *states =
>>> +				(ATOM_PPLIB_VCE_State_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>> +				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
>>> +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
>>> +			ATOM_PPLIB_VCE_State_Record *state_entry;
>>> +			VCEClockInfo *vce_clk;
>>> +			u32 size = limits->numEntries *
>>> +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
>>> +				kzalloc(size, GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
>>> +				limits->numEntries;
>>> +			entry = &limits->entries[0];
>>> +			state_entry = &states->entries[0];
>>> +			for (i = 0; i < limits->numEntries; i++) {
>>> +				vce_clk = (VCEClockInfo *)
>>> +					((u8 *)&array->entries[0] +
>>> +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
>>> +					le16_to_cpu(entry->usVoltage);
>>> +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
>>> +			}
>>> +			adev->pm.dpm.num_of_vce_states =
>>> +					states->numEntries > AMD_MAX_VCE_LEVELS ?
>>> +					AMD_MAX_VCE_LEVELS : states->numEntries;
>>> +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
>>> +				vce_clk = (VCEClockInfo *)
>>> +					((u8 *)&array->entries[0] +
>>> +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>> +				adev->pm.dpm.vce_states[i].evclk =
>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>> +				adev->pm.dpm.vce_states[i].ecclk =
>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>> +				adev->pm.dpm.vce_states[i].clk_idx =
>>> +					state_entry->ucClockInfoIndex & 0x3f;
>>> +				adev->pm.dpm.vce_states[i].pstate =
>>> +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
>>> +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
>>> +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
>>> +			}
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
>>> +			ext_hdr->usUVDTableOffset) {
>>> +			UVDClockInfoArray *array = (UVDClockInfoArray *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
>>> +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
>>> +				 1 + (array->ucNumEntries * sizeof(UVDClockInfo)));
>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
>>> +			u32 size = limits->numEntries *
>>> +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
>>> +				kzalloc(size, GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
>>> +				limits->numEntries;
>>> +			entry = &limits->entries[0];
>>> +			for (i = 0; i < limits->numEntries; i++) {
>>> +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
>>> +					((u8 *)&array->entries[0] +
>>> +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
>>> +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
>>> +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
>>> +					le16_to_cpu(entry->usVoltage);
>>> +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
>>> +			}
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
>>> +			ext_hdr->usSAMUTableOffset) {
>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
>>> +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
>>> +			u32 size = limits->numEntries *
>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
>>> +				kzalloc(size, GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
>>> +				limits->numEntries;
>>> +			entry = &limits->entries[0];
>>> +			for (i = 0; i < limits->numEntries; i++) {
>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
>>> +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
>>> +					le16_to_cpu(entry->usVoltage);
>>> +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
>>> +			}
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
>>> +		    ext_hdr->usPPMTableOffset) {
>>> +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
>>> +			adev->pm.dpm.dyn_state.ppm_table =
>>> +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.ppm_table) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
>>> +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
>>> +				le16_to_cpu(ppm->usCpuCoreNumber);
>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
>>> +				le32_to_cpu(ppm->ulPlatformTDP);
>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
>>> +				le32_to_cpu(ppm->ulPlatformTDC);
>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
>>> +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
>>> +				le32_to_cpu(ppm->ulApuTDP);
>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
>>> +				le32_to_cpu(ppm->ulDGpuTDP);
>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
>>> +				le32_to_cpu(ppm->ulDGpuUlvPower);
>>> +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
>>> +				le32_to_cpu(ppm->ulTjmax);
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
>>> +			ext_hdr->usACPTableOffset) {
>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
>>> +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
>>> +			u32 size = limits->numEntries *
>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
>>> +				kzalloc(size, GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
>>> +				limits->numEntries;
>>> +			entry = &limits->entries[0];
>>> +			for (i = 0; i < limits->numEntries; i++) {
>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
>>> +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
>>> +					le16_to_cpu(entry->usVoltage);
>>> +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
>>> +			}
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
>>> +			ext_hdr->usPowerTuneTableOffset) {
>>> +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> +			ATOM_PowerTune_Table *pt;
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table =
>>> +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
>>> +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
>>> +				amdgpu_free_extended_power_table(adev);
>>> +				return -ENOMEM;
>>> +			}
>>> +			if (rev > 0) {
>>> +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
>>> +					(mode_info->atom_context->bios + data_offset +
>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
>>> +					ppt->usMaximumPowerDeliveryLimit;
>>> +				pt = &ppt->power_tune_table;
>>> +			} else {
>>> +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
>>> +					(mode_info->atom_context->bios + data_offset +
>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
>>> +				pt = &ppt->power_tune_table;
>>> +			}
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
>>> +				le16_to_cpu(pt->usConfigurableTDP);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
>>> +				le16_to_cpu(pt->usBatteryPowerLimit);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
>>> +				le16_to_cpu(pt->usSmallPowerLimit);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
>>> +				le16_to_cpu(pt->usLowCACLeakage);
>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
>>> +				le16_to_cpu(pt->usHighCACLeakage);
>>> +		}
>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
>>> +				ext_hdr->usSclkVddgfxTableOffset) {
>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>> +				(mode_info->atom_context->bios + data_offset +
>>> +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
>>> +			ret = amdgpu_parse_clk_voltage_dep_table(
>>> +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
>>> +					dep_table);
>>> +			if (ret) {
>>> +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
>>> +				return ret;
>>> +			}
>>> +		}
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
>>> +{
>>> +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
>>> +
>>> +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
>>> +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
>>> +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
>>> +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
>>> +	kfree(dyn_state->cac_leakage_table.entries);
>>> +	kfree(dyn_state->phase_shedding_limits_table.entries);
>>> +	kfree(dyn_state->ppm_table);
>>> +	kfree(dyn_state->cac_tdp_table);
>>> +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
>>> +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
>>> +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
>>> +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
>>> +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
>>> +}
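
The teardown is deliberately unconditional: kfree(NULL) is a no-op, so pointers for tables that were never present need no guards, which is also why the parser above can call it on any partially filled state. The same idiom in miniature (userspace sketch):

	#include <stdlib.h>

	struct dep_table { int *entries; };

	/* free(NULL), like kfree(NULL), is defined to do nothing, so a single
	 * teardown path covers fully, partially and never-initialized state. */
	static void dep_table_fini(struct dep_table *t)
	{
		free(t->entries);
		t->entries = NULL;
	}
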
>>> +
>>> +static const char *pp_lib_thermal_controller_names[] = {
>>> +	"NONE",
>>> +	"lm63",
>>> +	"adm1032",
>>> +	"adm1030",
>>> +	"max6649",
>>> +	"lm64",
>>> +	"f75375",
>>> +	"RV6xx",
>>> +	"RV770",
>>> +	"adt7473",
>>> +	"NONE",
>>> +	"External GPIO",
>>> +	"Evergreen",
>>> +	"emc2103",
>>> +	"Sumo",
>>> +	"Northern Islands",
>>> +	"Southern Islands",
>>> +	"lm96163",
>>> +	"Sea Islands",
>>> +	"Kaveri/Kabini",
>>> +};
>>> +
>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
>>> +{
>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>> +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>> +	ATOM_PPLIB_THERMALCONTROLLER *controller;
>>> +	struct amdgpu_i2c_bus_rec i2c_bus;
>>> +	u16 data_offset;
>>> +	u8 frev, crev;
>>> +
>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>> +				   &frev, &crev, &data_offset))
>>> +		return;
>>> +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
>>> +		(mode_info->atom_context->bios + data_offset);
>>> +	controller = &power_table->sThermalController;
>>> +
>>> +	/* add the i2c bus for thermal/fan chip */
>>> +	if (controller->ucType > 0) {
>>> +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
>>> +			adev->pm.no_fan = true;
>>> +		adev->pm.fan_pulses_per_revolution =
>>> +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
>>> +		if (adev->pm.fan_pulses_per_revolution) {
>>> +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
>>> +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
>>> +		}
>>> +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
>>> +			DRM_INFO("External GPIO thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
>>> +		} else if (controller->ucType ==
>>> +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
>>> +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
>>> +		} else if (controller->ucType ==
>>> +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
>>> +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
>>> +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
>>> +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
>>> +				 pp_lib_thermal_controller_names[controller->ucType],
>>> +				 controller->ucI2cAddress >> 1,
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
>>> +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
>>> +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
>>> +			if (adev->pm.i2c_bus) {
>>> +				struct i2c_board_info info = { };
>>> +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
>>> +				info.addr = controller->ucI2cAddress >> 1;
>>> +				strlcpy(info.type, name, sizeof(info.type));
>>> +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
>>> +			}
>>> +		} else {
>>> +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
>>> +				 controller->ucType,
>>> +				 controller->ucI2cAddress >> 1,
>>> +				 (controller->ucFanParameters &
>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>> +		}
>>> +	}
>>> +}
>>> +
>>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	if (idx < adev->pm.dpm.num_of_vce_states)
>>> +		return &adev->pm.dpm.vce_states[idx];
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
>>> +						     enum amd_pm_state_type dpm_state)
>>> +{
>>> +	int i;
>>> +	struct amdgpu_ps *ps;
>>> +	u32 ui_class;
>>> +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
>>> +		true : false;
>>> +
>>> +	/* check if the vblank period is too short to adjust the mclk */
>>> +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
>>> +		if (amdgpu_dpm_vblank_too_short(adev))
>>> +			single_display = false;
>>> +	}
>>> +
>>> +	/* certain older asics have a separate 3D performance state,
>>> +	 * so try that first if the user selected performance
>>> +	 */
>>> +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
>>> +	/* balanced states don't exist at the moment */
>>> +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> +
>>> +restart_search:
>>> +	/* Pick the best power state based on current conditions */
>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
>>> +		ps = &adev->pm.dpm.ps[i];
>>> +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
>>> +		switch (dpm_state) {
>>> +		/* user states */
>>> +		case POWER_STATE_TYPE_BATTERY:
>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> +					if (single_display)
>>> +						return ps;
>>> +				} else
>>> +					return ps;
>>> +			}
>>> +			break;
>>> +		case POWER_STATE_TYPE_BALANCED:
>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> +					if (single_display)
>>> +						return ps;
>>> +				} else
>>> +					return ps;
>>> +			}
>>> +			break;
>>> +		case POWER_STATE_TYPE_PERFORMANCE:
>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>> +					if (single_display)
>>> +						return ps;
>>> +				} else
>>> +					return ps;
>>> +			}
>>> +			break;
>>> +		/* internal states */
>>> +		case POWER_STATE_TYPE_INTERNAL_UVD:
>>> +			if (adev->pm.dpm.uvd_ps)
>>> +				return adev->pm.dpm.uvd_ps;
>>> +			else
>>> +				break;
>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_BOOT:
>>> +			return adev->pm.dpm.boot_ps;
>>> +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_ACPI:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_ULV:
>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>> +				return ps;
>>> +			break;
>>> +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>> +				return ps;
>>> +			break;
>>> +		default:
>>> +			break;
>>> +		}
>>> +	}
>>> +	/* use a fallback state if we didn't match */
>>> +	switch (dpm_state) {
>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
>>> +		goto restart_search;
>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>> +		if (adev->pm.dpm.uvd_ps) {
>>> +			return adev->pm.dpm.uvd_ps;
>>> +		} else {
>>> +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> +			goto restart_search;
>>> +		}
>>> +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
>>> +		goto restart_search;
>>> +	case POWER_STATE_TYPE_INTERNAL_ACPI:
>>> +		dpm_state = POWER_STATE_TYPE_BATTERY;
>>> +		goto restart_search;
>>> +	case POWER_STATE_TYPE_BATTERY:
>>> +	case POWER_STATE_TYPE_BALANCED:
>>> +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>> +		goto restart_search;
>>> +	default:
>>> +		break;
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
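
When nothing matches, the search does not fail immediately; it degrades through a fixed chain until it reaches a performance state. Condensed into a lookup (illustrative only; the real code also prefers a dedicated UVD state when one exists):

	/* Fallback chain of amdgpu_dpm_pick_power_state(), names shortened. */
	enum ps_req { REQ_UVD_SD, REQ_UVD_HD, REQ_THERMAL, REQ_ACPI,
		      REQ_BATTERY, REQ_BALANCED, REQ_3DPERF, REQ_PERF };

	static enum ps_req fallback(enum ps_req r)
	{
		switch (r) {
		case REQ_UVD_SD:	return REQ_UVD_HD;
		case REQ_UVD_HD:	return REQ_PERF;	/* after trying uvd_ps */
		case REQ_THERMAL:	return REQ_ACPI;
		case REQ_ACPI:		return REQ_BATTERY;
		case REQ_BATTERY:
		case REQ_BALANCED:
		case REQ_3DPERF:	return REQ_PERF;
		default:		return r;		/* terminal */
		}
	}
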
>>> +
>>> +int amdgpu_dpm_change_power_state_locked(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	struct amdgpu_ps *ps;
>>> +	enum amd_pm_state_type dpm_state;
>>> +	int ret;
>>> +	bool equal = false;
>>> +
>>> +	/* if dpm init failed */
>>> +	if (!adev->pm.dpm_enabled)
>>> +		return 0;
>>> +
>>> +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
>>> +		/* add other state override checks here */
>>> +		if ((!adev->pm.dpm.thermal_active) &&
>>> +		    (!adev->pm.dpm.uvd_active))
>>> +			adev->pm.dpm.state = adev->pm.dpm.user_state;
>>> +	}
>>> +	dpm_state = adev->pm.dpm.state;
>>> +
>>> +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
>>> +	if (ps)
>>> +		adev->pm.dpm.requested_ps = ps;
>>> +	else
>>> +		return -EINVAL;
>>> +
>>> +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
>>> +		printk("switching from power state:\n");
>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
>>> +		printk("switching to power state:\n");
>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
>>> +	}
>>> +
>>> +	/* update whether vce is active */
>>> +	ps->vce_active = adev->pm.dpm.vce_active;
>>> +	if (adev->powerplay.pp_funcs->display_configuration_changed)
>>> +		amdgpu_dpm_display_configuration_changed(adev);
>>> +
>>> +	ret = amdgpu_dpm_pre_set_power_state(adev);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	if (adev->powerplay.pp_funcs->check_state_equal) {
>>> +		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
>>> +			equal = false;
>>> +	}
>>> +
>>> +	if (equal)
>>> +		return 0;
>>> +
>>> +	if (adev->powerplay.pp_funcs->set_power_state)
>>> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
>>> +
>>> +	amdgpu_dpm_post_set_power_state(adev);
>>> +
>>> +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
>>> +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
>>> +
>>> +	if (adev->powerplay.pp_funcs->force_performance_level) {
>>> +		if (adev->pm.dpm.thermal_active) {
>>> +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
>>> +			/* force low perf level for thermal */
>>> +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
>>> +			/* save the user's level */
>>> +			adev->pm.dpm.forced_level = level;
>>> +		} else {
>>> +			/* otherwise, user selected level */
>>> +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
>>> +		}
>>> +	}
>>> +
>>> +	return 0;
>>> +}
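
With the si_dpm.c hunk below registering this function as the .change_power_state hook, common code can reach it through the amd_pm_funcs table instead of calling into legacy code directly. A hedged sketch of such a dispatch site (the actual caller and its locking are introduced elsewhere in the series; the member name and signature are assumed from the registration below):

	/* Hypothetical dispatch through the function table; assumes only the
	 * .change_power_state member added by this patch. */
	static int example_change_power_state(struct amdgpu_device *adev)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		if (!pp_funcs || !pp_funcs->change_power_state)
			return 0;	/* nothing registered for this ASIC */

		return pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}
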
>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>> new file mode 100644
>>> index 000000000000..4adc765c8824
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>> @@ -0,0 +1,70 @@
>>> +/*
>>> + * Copyright 2021 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#ifndef __LEGACY_DPM_H__
>>> +#define __LEGACY_DPM_H__
>>> +
>>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>> +					    u32 clock,
>>> +					    bool strobe_mode,
>>> +					    struct atom_mpll_param *mpll_param);
>>> +
>>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>> +					     u32 eng_clock, u32 mem_clock);
>>> +
>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd);
>>> +
>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>> +			     u16 voltage_id, u16 *voltage);
>>> +
>>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>> +						      u16 *voltage,
>>> +						      u16 leakage_idx);
>>> +
>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>> +			      u8 voltage_type,
>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
>>> +
>>> +bool
>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>> +				u8 voltage_type, u8 voltage_mode);
>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>> +				      u8 voltage_type, u8 voltage_mode,
>>> +				      struct atom_voltage_table *voltage_table);
>>> +
>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>> +				      u8 module_index,
>>> +				      struct atom_mc_reg_table *reg_table);
>>> +
>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
>>> +void amdgpu_dpm_print_cap_info(u32 caps);
>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>> +				struct amdgpu_ps *rps);
>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
>>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
>>> +int amdgpu_dpm_change_power_state_locked(void *handle);
>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>> index 4f84d8b893f1..a2881c90d187 100644
>>> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>> @@ -37,6 +37,7 @@
>>>    #include <linux/math64.h>
>>>    #include <linux/seq_file.h>
>>>    #include <linux/firmware.h>
>>> +#include <legacy_dpm.h>
>>>
>>>    #define MC_CG_ARB_FREQ_F0           0x0a
>>>    #define MC_CG_ARB_FREQ_F1           0x0b
>>> @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
>>>    	.check_state_equal = &si_check_state_equal,
>>>    	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>>>    	.read_sensor = &si_dpm_read_sensor,
>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>>>    };
>>>
>>>    static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
>>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power
  2021-11-30 13:57   ` Lazar, Lijo
@ 2021-12-01  5:39     ` Quan, Evan
  2021-12-01  6:38       ` Lazar, Lijo
  0 siblings, 1 reply; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  5:39 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Tuesday, November 30, 2021 9:58 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 13/17] drm/amd/pm: do not expose the
> smu_context structure used internally in power
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > This can cover the power implementation details. And as what did for
> > powerplay framework, we hook the smu_context to adev->powerplay.pp_handle.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: I3969c9f62a8b63dc6e4321a488d8f15022ffeb3d
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  6 --
> >   .../gpu/drm/amd/include/kgd_pp_interface.h    |  9 +++
> >   drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 51 ++++++++++------
> >   drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       | 11 +---
> >   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 60 +++++++++++++------
> >   .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  9 +--
> >   .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |  9 +--
> >   .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |  9 +--
> >   .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  4 +-
> >   .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  9 +--
> >   .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  8 +--
> >   11 files changed, 111 insertions(+), 74 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > index c987813a4996..fefabd568483 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > @@ -99,7 +99,6 @@
> >   #include "amdgpu_gem.h"
> >   #include "amdgpu_doorbell.h"
> >   #include "amdgpu_amdkfd.h"
> > -#include "amdgpu_smu.h"
> >   #include "amdgpu_discovery.h"
> >   #include "amdgpu_mes.h"
> >   #include "amdgpu_umc.h"
> > @@ -950,11 +949,6 @@ struct amdgpu_device {
> >
> >   	/* powerplay */
> >   	struct amd_powerplay		powerplay;
> > -
> > -	/* smu */
> > -	struct smu_context		smu;
> > -
> > -	/* dpm */
> >   	struct amdgpu_pm		pm;
> >   	u32				cg_flags;
> >   	u32				pg_flags;
> > diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > index 7919e96e772b..da6a82430048 100644
> > --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> > @@ -25,6 +25,9 @@
> >   #define __KGD_PP_INTERFACE_H__
> >
> >   extern const struct amdgpu_ip_block_version pp_smu_ip_block;
> > +extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> > +extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> > +extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
> >
> >   enum smu_event_type {
> >   	SMU_EVENT_RESET_COMPLETE = 0,
> > @@ -244,6 +247,12 @@ enum pp_power_type
> >   	PP_PWR_TYPE_FAST,
> >   };
> >
> > +enum smu_ppt_limit_type
> > +{
> > +	SMU_DEFAULT_PPT_LIMIT = 0,
> > +	SMU_FAST_PPT_LIMIT,
> > +};
> > +
> 
> This is a contradiction. If the entry point is dpm, this shouldn't be here and
> the external interface doesn't need to know about internal datatypes.
[Quan, Evan] This is needed by amdgpu_hwmon_show_power_label() in amdgpu_pm.c.
So it has to live somewhere that can be accessed from outside of power,
and kgd_pp_interface.h is the right place for that.
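For reference, the consumer looks roughly like below - a simplified
sketch of the amdgpu_pm.c side (not the exact code), which can only
include the public kgd_pp_interface.h, never amdgpu_smu.h:

#include <linux/hwmon-sysfs.h>
#include "kgd_pp_interface.h"

/* Sketch: hwmon callback outside of power that needs the limit type */
static ssize_t amdgpu_hwmon_show_power_label(struct device *dev,
					     struct device_attribute *attr,
					     char *buf)
{
	int limit_type = to_sensor_dev_attr(attr)->index;

	/* SMU_FAST_PPT_LIMIT is why the enum must be visible here */
	return sysfs_emit(buf, "%s\n",
			  limit_type == SMU_FAST_PPT_LIMIT ? "fastPPT" : "slowPPT");
}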

> 
> >   #define PP_GROUP_MASK        0xF0000000
> >   #define PP_GROUP_SHIFT       28
> >
> > diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > index 8f0ae58f4292..a5cbbf9367fe 100644
> > --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> > @@ -31,6 +31,7 @@
> >   #include "amdgpu_display.h"
> >   #include "hwmgr.h"
> >   #include <linux/power_supply.h>
> > +#include "amdgpu_smu.h"
> >
> >   #define amdgpu_dpm_enable_bapm(adev, e) \
> >
> > ((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> > @@ -213,7 +214,7 @@ int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
> >
> >   bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (is_support_sw_smu(adev))
> >   		return smu_mode1_reset_is_support(smu);
> > @@ -223,7 +224,7 @@ bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
> >
> >   int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (is_support_sw_smu(adev))
> >   		return smu_mode1_reset(smu);
> > @@ -276,7 +277,7 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
> >
> >   int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device *adev, bool en)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (is_support_sw_smu(adev))
> >   		return smu_allow_xgmi_power_down(smu, en); @@ -341,7
> +342,7 @@
> > void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
> >   		mutex_unlock(&adev->pm.mutex);
> >
> >   		if (is_support_sw_smu(adev))
> > -			smu_set_ac_dc(&adev->smu);
> > +			smu_set_ac_dc(adev->powerplay.pp_handle);
> >   	}
> >   }
> >
> > @@ -423,15 +424,16 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
> >
> >   int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
> >   {
> > -	return smu_set_light_sbr(&adev->smu, enable);
> > +	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
> >   }
> >
> >   int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	int ret = 0;
> >
> > -	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> > -		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
> > +	if (is_support_sw_smu(adev))
> > +		ret = smu_send_hbm_bad_pages_num(smu, size);
> >
> >   	return ret;
> >   }
> > @@ -446,7 +448,7 @@ int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> >
> >   	switch (type) {
> >   	case PP_SCLK:
> > -		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
> > +		return smu_get_dpm_freq_range(adev->powerplay.pp_handle, SMU_SCLK, min, max);
> >   	default:
> >   		return -EINVAL;
> >   	}
> > @@ -457,12 +459,14 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >   				   uint32_t min,
> >   				   uint32_t max)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> >   	if (!is_support_sw_smu(adev))
> >   		return -EOPNOTSUPP;
> >
> >   	switch (type) {
> >   	case PP_SCLK:
> > -		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
> > +		return smu_set_soft_freq_range(smu, SMU_SCLK, min, max);
> >   	default:
> >   		return -EINVAL;
> >   	}
> > @@ -470,33 +474,41 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >
> >   int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> >   	if (!is_support_sw_smu(adev))
> >   		return 0;
> >
> > -	return smu_write_watermarks_table(&adev->smu);
> > +	return smu_write_watermarks_table(smu);
> >   }
> >
> >   int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
> >   			      enum smu_event_type event,
> >   			      uint64_t event_arg)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> >   	if (!is_support_sw_smu(adev))
> >   		return -EOPNOTSUPP;
> >
> > -	return smu_wait_for_event(&adev->smu, event, event_arg);
> > +	return smu_wait_for_event(smu, event, event_arg);
> >   }
> >
> >   int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> >   	if (!is_support_sw_smu(adev))
> >   		return -EOPNOTSUPP;
> >
> > -	return smu_get_status_gfxoff(&adev->smu, value);
> > +	return smu_get_status_gfxoff(smu, value);
> >   }
> >
> >   uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
> >   {
> > -	return atomic64_read(&adev->smu.throttle_int_counter);
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> > +	return atomic64_read(&smu->throttle_int_counter);
> >   }
> >
> >   /* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
> > @@ -518,10 +530,12 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> >   int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> >   			    void *umc_ecc)
> >   {
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> >   	if (!is_support_sw_smu(adev))
> >   		return -EOPNOTSUPP;
> >
> > -	return smu_get_ecc_info(&adev->smu, umc_ecc);
> > +	return smu_get_ecc_info(smu, umc_ecc);
> >   }
> >
> >   struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
> > @@ -919,9 +933,10 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
> >   int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
> >   {
> >   	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> > -	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
> > -	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
> > +	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
> > +	    (is_support_sw_smu(adev) && smu->is_apu) ||
> >   		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
> >   		return true;
> >
> > @@ -944,7 +959,9 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
> >
> >   int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
> >   {
> > -	return adev->smu.cpu_core_num;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> > +	return smu->cpu_core_num;
> >   }
> >
> >   void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > index 29791bb21fba..f44139b415b4 100644
> > --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > @@ -205,12 +205,6 @@ enum smu_power_src_type
> >   	SMU_POWER_SOURCE_COUNT,
> >   };
> >
> > -enum smu_ppt_limit_type
> > -{
> > -	SMU_DEFAULT_PPT_LIMIT = 0,
> > -	SMU_FAST_PPT_LIMIT,
> > -};
> > -
> >   enum smu_ppt_limit_level
> >   {
> >   	SMU_PPT_LIMIT_MIN = -1,
> > @@ -1389,10 +1383,6 @@ int smu_mode1_reset(struct smu_context *smu);
> >
> >   extern const struct amd_ip_funcs smu_ip_funcs;
> >
> > -extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> > -extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> > -extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
> > -
> >   bool is_support_sw_smu(struct amdgpu_device *adev);
> >   bool is_support_cclk_dpm(struct amdgpu_device *adev);
> >   int smu_write_watermarks_table(struct smu_context *smu);
> > @@ -1416,6 +1406,7 @@ int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
> >   int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
> >   int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
> >   void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev);
> > +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size);
> >
> >   #endif
> >   #endif
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > index eaed5aba7547..2c3fd3cfef05 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > @@ -468,7 +468,7 @@ bool is_support_sw_smu(struct amdgpu_device *adev)
> >
> >   bool is_support_cclk_dpm(struct amdgpu_device *adev)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
> >   		return false;
> > @@ -572,7 +572,7 @@ static int smu_get_driver_allowed_feature_mask(struct smu_context *smu)
> >
> >   static int smu_set_funcs(struct amdgpu_device *adev)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
> >   		smu->od_enabled = true;
> > @@ -624,7 +624,7 @@ static int smu_set_funcs(struct amdgpu_device *adev)
> >   static int smu_early_init(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu;
> > +
> > +	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
> > +	if (!smu)
> > +		return -ENOMEM;
> >
> >   	smu->adev = adev;
> >   	smu->pm_enabled = !!amdgpu_dpm;
> > @@ -684,7 +688,7 @@ static int smu_set_default_dpm_table(struct smu_context *smu)
> >   static int smu_late_init(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	int ret = 0;
> >
> >   	smu_set_fine_grain_gfx_freq_parameters(smu);
> > @@ -730,7 +734,7 @@ static int smu_late_init(void *handle)
> >
> >   	smu_get_fan_parameters(smu);
> >
> > -	smu_handle_task(&adev->smu,
> > +	smu_handle_task(smu,
> >   			smu->smu_dpm.dpm_level,
> >   			AMD_PP_TASK_COMPLETE_INIT,
> >   			false);
> > @@ -1020,7 +1024,7 @@ static void smu_interrupt_work_fn(struct work_struct *work)
> >   static int smu_sw_init(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	int ret;
> >
> >   	smu->pool_size = adev->pm.smu_prv_buffer_size;
> > @@ -1095,7 +1099,7 @@ static int smu_sw_fini(void *handle)
> >   static int smu_sw_fini(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	int ret;
> >
> >   	ret = smu_smc_table_sw_fini(smu);
> > @@ -1330,7 +1334,7 @@ static int smu_hw_init(void *handle)
> >   {
> >   	int ret;
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
> >   		smu->pm_enabled = false;
> > @@ -1344,10 +1348,10 @@ static int smu_hw_init(void *handle)
> >   	}
> >
> >   	if (smu->is_apu) {
> > -		smu_powergate_sdma(&adev->smu, false);
> > +		smu_powergate_sdma(smu, false);
> >   		smu_dpm_set_vcn_enable(smu, true);
> >   		smu_dpm_set_jpeg_enable(smu, true);
> > -		smu_set_gfx_cgpg(&adev->smu, true);
> > +		smu_set_gfx_cgpg(smu, true);
> >   	}
> >
> >   	if (!smu->pm_enabled)
> > @@ -1501,13 +1505,13 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
> >   static int smu_hw_fini(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> >   		return 0;
> >
> >   	if (smu->is_apu) {
> > -		smu_powergate_sdma(&adev->smu, true);
> > +		smu_powergate_sdma(smu, true);
> >   	}
> >
> >   	smu_dpm_set_vcn_enable(smu, false);
> > @@ -1524,6 +1528,14 @@ static int smu_hw_fini(void *handle)
> >   	return smu_smc_hw_cleanup(smu);
> >   }
> >
> > +static void smu_late_fini(void *handle)
> > +{
> > +	struct amdgpu_device *adev = handle;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +
> > +	kfree(smu);
> > +}
> > +
> 
> This doesn't look related to this change.
[Quan, Evan] "smu" is now dynamically allocated, so we need a place to free it.
As done in the powerplay framework, ->late_fini is the right place.
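To make the pairing explicit, the lifecycle now looks like below
(condensed sketch; error handling and the rest of early init dropped,
and the pp_handle hookup is shown where it logically belongs, not
necessarily in the exact hunk it lands in):

static int smu_early_init(void *handle)
{
	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
	struct smu_context *smu;

	/* allocated before any other IP callback can use it */
	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
	if (!smu)
		return -ENOMEM;

	smu->adev = adev;
	adev->powerplay.pp_handle = smu;	/* the only external reference */

	return 0;
}

static void smu_late_fini(void *handle)
{
	struct amdgpu_device *adev = handle;

	/* freed only after all other fini callbacks have completed */
	kfree(adev->powerplay.pp_handle);
	adev->powerplay.pp_handle = NULL;
}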
> 
> >   static int smu_reset(struct smu_context *smu)
> >   {
> >   	struct amdgpu_device *adev = smu->adev;
> > @@ -1551,7 +1563,7 @@ static int smu_reset(struct smu_context *smu)
> >   static int smu_suspend(void *handle)
> >   {
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	int ret;
> >
> >   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> > @@ -1570,7 +1582,7 @@ static int smu_suspend(void *handle)
> >
> >   	/* skip CGPG when in S0ix */
> >   	if (smu->is_apu && !adev->in_s0ix)
> > -		smu_set_gfx_cgpg(&adev->smu, false);
> > +		smu_set_gfx_cgpg(smu, false);
> >
> >   	return 0;
> >   }
> > @@ -1579,7 +1591,7 @@ static int smu_resume(void *handle)
> >   {
> >   	int ret;
> >   	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> >   		return 0;
> > @@ -1602,7 +1614,7 @@ static int smu_resume(void *handle)
> >   	}
> >
> >   	if (smu->is_apu)
> > -		smu_set_gfx_cgpg(&adev->smu, true);
> > +		smu_set_gfx_cgpg(smu, true);
> >
> >   	smu->disable_uclk_switch = 0;
> >
> > @@ -2134,6 +2146,7 @@ const struct amd_ip_funcs smu_ip_funcs = {
> >   	.sw_fini = smu_sw_fini,
> >   	.hw_init = smu_hw_init,
> >   	.hw_fini = smu_hw_fini,
> > +	.late_fini = smu_late_fini,
> >   	.suspend = smu_suspend,
> >   	.resume = smu_resume,
> >   	.is_idle = NULL,
> > @@ -3198,7 +3211,7 @@ int smu_stb_collect_info(struct smu_context *smu, void *buf, uint32_t size)
> >   static int smu_stb_debugfs_open(struct inode *inode, struct file *filp)
> >   {
> >   	struct amdgpu_device *adev = filp->f_inode->i_private;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	unsigned char *buf;
> >   	int r;
> >
> > @@ -3223,7 +3236,7 @@ static ssize_t smu_stb_debugfs_read(struct file *filp, char __user *buf, size_t
> >   				loff_t *pos)
> >   {
> >   	struct amdgpu_device *adev = filp->f_inode->i_private;
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >
> >   	if (!filp->private_data)
> > @@ -3264,7 +3277,7 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
> >   {
> >   #if defined(CONFIG_DEBUG_FS)
> >
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >
> >   	if (!smu->stb_context.stb_buf_size)
> >   		return;
> > @@ -3276,5 +3289,14 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
> >   			    &smu_stb_debugfs_fops,
> >   			    smu->stb_context.stb_buf_size);
> >   #endif
> > +}
> > +
> > +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
> > +{
> > +	int ret = 0;
> > +
> > +	if (smu->ppt_funcs->send_hbm_bad_pages_num)
> > +		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);
> >
> > +	return ret;
> 
> This also looks unrelated.
[Quan, Evan] This was moved from amdgpu_dpm.c to here (amdgpu_smu.c),
since smu_context is now an internal data structure of the swsmu framework.
Any access to smu->ppt_funcs should therefore happen from amdgpu_smu.c.
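So the layering after this patch is (condensed from the hunks above):

/* amdgpu_dpm.c - entry point layer, only holds the opaque pp_handle */
int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
{
	struct smu_context *smu = adev->powerplay.pp_handle;
	int ret = 0;

	if (is_support_sw_smu(adev))
		ret = smu_send_hbm_bad_pages_num(smu, size);

	return ret;
}

/* amdgpu_smu.c - the only layer that dereferences smu->ppt_funcs */
int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
{
	int ret = 0;

	if (smu->ppt_funcs->send_hbm_bad_pages_num)
		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);

	return ret;
}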

BR
Evan
> 
> Thanks,
> Lijo
> 
> >   }
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > index 05defeee0c87..a03bbd2a7aa0 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > @@ -2082,7 +2082,8 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			     struct i2c_msg *msg, int num_msgs)
> >   {
> >   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> > -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +	struct smu_table_context *smu_table = &smu->smu_table;
> >   	struct smu_table *table = &smu_table->driver_table;
> >   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >   	int i, j, r, c;
> > @@ -2128,9 +2129,9 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			}
> >   		}
> >   	}
> > -	mutex_lock(&adev->smu.mutex);
> > -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > -	mutex_unlock(&adev->smu.mutex);
> > +	mutex_lock(&smu->mutex);
> > +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > +	mutex_unlock(&smu->mutex);
> >   	if (r)
> >   		goto fail;
> >
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> > index 2bb7816b245a..37e11716e919 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> > @@ -2779,7 +2779,8 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			   struct i2c_msg *msg, int num_msgs)
> >   {
> >   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> > -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +	struct smu_table_context *smu_table = &smu->smu_table;
> >   	struct smu_table *table = &smu_table->driver_table;
> >   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >   	int i, j, r, c;
> > @@ -2825,9 +2826,9 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			}
> >   		}
> >   	}
> > -	mutex_lock(&adev->smu.mutex);
> > -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > -	mutex_unlock(&adev->smu.mutex);
> > +	mutex_lock(&smu->mutex);
> > +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > +	mutex_unlock(&smu->mutex);
> >   	if (r)
> >   		goto fail;
> >
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > index 777f717c37ae..6a5064f4ea86 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> > @@ -3459,7 +3459,8 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   				   struct i2c_msg *msg, int num_msgs)
> >   {
> >   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> > -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +	struct smu_table_context *smu_table = &smu->smu_table;
> >   	struct smu_table *table = &smu_table->driver_table;
> >   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >   	int i, j, r, c;
> > @@ -3505,9 +3506,9 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			}
> >   		}
> >   	}
> > -	mutex_lock(&adev->smu.mutex);
> > -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > -	mutex_unlock(&adev->smu.mutex);
> > +	mutex_lock(&smu->mutex);
> > +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > +	mutex_unlock(&smu->mutex);
> >   	if (r)
> >   		goto fail;
> >
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> > index 28b7c0562b99..2a53b5b1d261 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> > @@ -1372,7 +1372,7 @@ static int smu_v11_0_set_irq_state(struct amdgpu_device *adev,
> >   				   unsigned tyep,
> >   				   enum amdgpu_interrupt_state state)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	uint32_t low, high;
> >   	uint32_t val = 0;
> >
> > @@ -1441,7 +1441,7 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
> >   				 struct amdgpu_irq_src *source,
> >   				 struct amdgpu_iv_entry *entry)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	uint32_t client_id = entry->client_id;
> >   	uint32_t src_id = entry->src_id;
> >   	/*
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > index 6e781cee8bb6..3c82f5455f88 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > @@ -1484,7 +1484,8 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			      struct i2c_msg *msg, int num_msgs)
> >   {
> >   	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> > -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> > +	struct smu_table_context *smu_table = &smu->smu_table;
> >   	struct smu_table *table = &smu_table->driver_table;
> >   	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >   	int i, j, r, c;
> > @@ -1530,9 +1531,9 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
> >   			}
> >   		}
> >   	}
> > -	mutex_lock(&adev->smu.mutex);
> > -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > -	mutex_unlock(&adev->smu.mutex);
> > +	mutex_lock(&smu->mutex);
> > +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> > +	mutex_unlock(&smu->mutex);
> >   	if (r)
> >   		goto fail;
> >
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > index 55421ea622fb..4ed01e9d88fb 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > @@ -1195,7 +1195,7 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
> >   				   unsigned tyep,
> >   				   enum amdgpu_interrupt_state state)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	uint32_t low, high;
> >   	uint32_t val = 0;
> >
> > @@ -1270,7 +1270,7 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
> >   				 struct amdgpu_irq_src *source,
> >   				 struct amdgpu_iv_entry *entry)
> >   {
> > -	struct smu_context *smu = &adev->smu;
> > +	struct smu_context *smu = adev->powerplay.pp_handle;
> >   	uint32_t client_id = entry->client_id;
> >   	uint32_t src_id = entry->src_id;
> >   	/*
> > @@ -1316,11 +1316,11 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
> >   			switch (ctxid) {
> >   			case 0x3:
> >   				dev_dbg(adev->dev, "Switched to AC
> mode!\n");
> > -				smu_v13_0_ack_ac_dc_interrupt(&adev-
> >smu);
> > +				smu_v13_0_ack_ac_dc_interrupt(smu);
> >   				break;
> >   			case 0x4:
> >   				dev_dbg(adev->dev, "Switched to DC
> mode!\n");
> > -				smu_v13_0_ack_ac_dc_interrupt(&adev-
> >smu);
> > +				smu_v13_0_ack_ac_dc_interrupt(smu);
> >   				break;
> >   			case 0x7:
> >   				/*
> >


* RE: [PATCH V2 14/17] drm/amd/pm: relocate the power related headers
  2021-11-30 14:07   ` Lazar, Lijo
@ 2021-12-01  6:22     ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  6:22 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Tuesday, November 30, 2021 10:07 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 14/17] drm/amd/pm: relocate the power related
> headers
> 
> 
> 
> On 11/30/2021 1:12 PM, Evan Quan wrote:
> > Instead of centralizing all headers in the same folder. Separate them
> > into different folders and place them among those source files those
> > who really need them.
> >
> > Signed-off-by: Evan Quan <evan.quan@amd.com>
> > Change-Id: Id74cb4c7006327ca7ecd22daf17321e417c4aa71
> > ---
> >   drivers/gpu/drm/amd/pm/Makefile               | 10 +++---
> >   drivers/gpu/drm/amd/pm/legacy-dpm/Makefile    | 32 +++++++++++++++++++
> >   .../pm/{powerplay => legacy-dpm}/cik_dpm.h    |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.c |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/kv_dpm.h |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/kv_smc.c |  0
> >   .../pm/{powerplay => legacy-dpm}/legacy_dpm.c |  0
> >   .../pm/{powerplay => legacy-dpm}/legacy_dpm.h |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/ppsmc.h  |  0
> >   .../pm/{powerplay => legacy-dpm}/r600_dpm.h   |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.c |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/si_dpm.h |  0
> >   .../amd/pm/{powerplay => legacy-dpm}/si_smc.c |  0
> >   .../{powerplay => legacy-dpm}/sislands_smc.h  |  0
> >   drivers/gpu/drm/amd/pm/powerplay/Makefile     |  6 +---
> >   .../pm/{ => powerplay}/inc/amd_powerplay.h    |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h |  0
> >   .../amd/pm/{ => powerplay}/inc/fiji_ppsmc.h   |  0
> >   .../pm/{ => powerplay}/inc/hardwaremanager.h  |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/hwmgr.h    |  0
> >   .../{ => powerplay}/inc/polaris10_pwrvirus.h  |  0
> >   .../amd/pm/{ => powerplay}/inc/power_state.h  |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/pp_debug.h |  0
> >   .../amd/pm/{ => powerplay}/inc/pp_endian.h    |  0
> >   .../amd/pm/{ => powerplay}/inc/pp_thermal.h   |  0
> >   .../amd/pm/{ => powerplay}/inc/ppinterrupt.h  |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu10.h    |  0
> >   .../pm/{ => powerplay}/inc/smu10_driver_if.h  |  0
> >   .../pm/{ => powerplay}/inc/smu11_driver_if.h  |  0
> >   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu71.h    |  0
> >   .../pm/{ => powerplay}/inc/smu71_discrete.h   |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu72.h    |  0
> >   .../pm/{ => powerplay}/inc/smu72_discrete.h   |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu73.h    |  0
> >   .../pm/{ => powerplay}/inc/smu73_discrete.h   |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu74.h    |  0
> >   .../pm/{ => powerplay}/inc/smu74_discrete.h   |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smu75.h    |  0
> >   .../pm/{ => powerplay}/inc/smu75_discrete.h   |  0
> >   .../amd/pm/{ => powerplay}/inc/smu7_common.h  |  0
> >   .../pm/{ => powerplay}/inc/smu7_discrete.h    |  0
> >   .../amd/pm/{ => powerplay}/inc/smu7_fusion.h  |  0
> >   .../amd/pm/{ => powerplay}/inc/smu7_ppsmc.h   |  0
> >   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h |  0
> >   .../amd/pm/{ => powerplay}/inc/smu8_fusion.h  |  0
> >   .../gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h |  0
> >   .../pm/{ => powerplay}/inc/smu9_driver_if.h   |  0
> >   .../{ => powerplay}/inc/smu_ucode_xfer_cz.h   |  0
> >   .../{ => powerplay}/inc/smu_ucode_xfer_vi.h   |  0
> >   .../drm/amd/pm/{ => powerplay}/inc/smumgr.h   |  0
> >   .../amd/pm/{ => powerplay}/inc/tonga_ppsmc.h  |  0
> >   .../amd/pm/{ => powerplay}/inc/vega10_ppsmc.h |  0
> >   .../inc/vega12/smu9_driver_if.h               |  0
> >   .../amd/pm/{ => powerplay}/inc/vega12_ppsmc.h |  0
> >   .../amd/pm/{ => powerplay}/inc/vega20_ppsmc.h |  0
> >   .../amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h  |  0
> >   .../drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h   |  0
> >   .../amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h   |  0
> >   .../inc/smu11_driver_if_arcturus.h            |  0
> >   .../inc/smu11_driver_if_cyan_skillfish.h      |  0
> >   .../{ => swsmu}/inc/smu11_driver_if_navi10.h  |  0
> >   .../inc/smu11_driver_if_sienna_cichlid.h      |  0
> >   .../{ => swsmu}/inc/smu11_driver_if_vangogh.h |  0
> >   .../amd/pm/{ => swsmu}/inc/smu12_driver_if.h  |  0
> >   .../inc/smu13_driver_if_aldebaran.h           |  0
> >   .../inc/smu13_driver_if_yellow_carp.h         |  0
> >   .../pm/{ => swsmu}/inc/smu_11_0_cdr_table.h   |  0
> >   .../drm/amd/pm/{ => swsmu}/inc/smu_types.h    |  0
> >   .../drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h    |  0
> >   .../pm/{ => swsmu}/inc/smu_v11_0_7_ppsmc.h    |  0
> >   .../pm/{ => swsmu}/inc/smu_v11_0_7_pptable.h  |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h  |  0
> >   .../pm/{ => swsmu}/inc/smu_v11_0_pptable.h    |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h   |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h  |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h   |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h  |  0
> >   .../drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h    |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h  |  0
> >   .../drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h    |  0
> >   .../amd/pm/{ => swsmu}/inc/smu_v13_0_1_pmfw.h |  0
> >   .../pm/{ => swsmu}/inc/smu_v13_0_1_ppsmc.h    |  0
> >   .../pm/{ => swsmu}/inc/smu_v13_0_pptable.h    |  0
> >   .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  1 -
> >   .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  1 -
> >   87 files changed, 39 insertions(+), 11 deletions(-)
> >   create mode 100644 drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/cik_dpm.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_dpm.c
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/kv_dpm.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/kv_smc.c
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/legacy_dpm.c (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/legacy_dpm.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/r600_dpm.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.c
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_dpm.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-dpm}/si_smc.c
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{powerplay => legacy-
> dpm}/sislands_smc.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/amd_powerplay.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/cz_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/fiji_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/hardwaremanager.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/hwmgr.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/polaris10_pwrvirus.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/power_state.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_debug.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_endian.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/pp_thermal.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/ppinterrupt.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/rv_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu10.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/smu10_driver_if.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/smu11_driver_if.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu71_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu72_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu73_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu74_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu75_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_common.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_discrete.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_fusion.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu7_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu8_fusion.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smu9_driver_if.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/smu_ucode_xfer_cz.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/smu_ucode_xfer_vi.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/smumgr.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/tonga_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega10_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> powerplay}/inc/vega12/smu9_driver_if.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega12_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => powerplay}/inc/vega20_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/aldebaran_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/amdgpu_smu.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/arcturus_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu11_driver_if_arcturus.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu11_driver_if_cyan_skillfish.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu11_driver_if_navi10.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu11_driver_if_sienna_cichlid.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu11_driver_if_vangogh.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu12_driver_if.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu13_driver_if_aldebaran.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu13_driver_if_yellow_carp.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_11_0_cdr_table.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_types.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v11_0_7_ppsmc.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v11_0_7_pptable.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_0_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v11_0_pptable.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_pmfw.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_5_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_pmfw.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v11_8_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v12_0_ppsmc.h
> (100%)
> >   rename drivers/gpu/drm/amd/pm/{ => swsmu}/inc/smu_v13_0.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v13_0_1_pmfw.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v13_0_1_ppsmc.h (100%)
> >   rename drivers/gpu/drm/amd/pm/{ =>
> swsmu}/inc/smu_v13_0_pptable.h
> > (100%)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/Makefile b/drivers/gpu/drm/amd/pm/Makefile
> > index d35ffde387f1..84c7203b5e46 100644
> > --- a/drivers/gpu/drm/amd/pm/Makefile
> > +++ b/drivers/gpu/drm/amd/pm/Makefile
> > @@ -21,20 +21,22 @@
> >   #
> >
> >   subdir-ccflags-y += \
> > -		-I$(FULL_AMD_PATH)/pm/inc/  \
> >   		-I$(FULL_AMD_PATH)/include/asic_reg  \
> >   		-I$(FULL_AMD_PATH)/include  \
> > +		-I$(FULL_AMD_PATH)/pm/inc/  \
> >   		-I$(FULL_AMD_PATH)/pm/swsmu \
> > +		-I$(FULL_AMD_PATH)/pm/swsmu/inc \
> >   		-I$(FULL_AMD_PATH)/pm/swsmu/smu11 \
> >   		-I$(FULL_AMD_PATH)/pm/swsmu/smu12 \
> >   		-I$(FULL_AMD_PATH)/pm/swsmu/smu13 \
> > -		-I$(FULL_AMD_PATH)/pm/powerplay \
> > +		-I$(FULL_AMD_PATH)/pm/powerplay/inc \
> >   		-I$(FULL_AMD_PATH)/pm/powerplay/smumgr\
> > -		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr
> > +		-I$(FULL_AMD_PATH)/pm/powerplay/hwmgr \
> > +		-I$(FULL_AMD_PATH)/pm/legacy-dpm
> >
> >   AMD_PM_PATH = ../pm
> >
> > -PM_LIBS = swsmu powerplay
> > +PM_LIBS = swsmu powerplay legacy-dpm
> >
> >   AMD_PM = $(addsuffix /Makefile,$(addprefix
> > $(FULL_AMD_PATH)/pm/,$(PM_LIBS)))
> >
> > diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> > new file mode 100644
> > index 000000000000..baa4265d1daa
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/Makefile
> > @@ -0,0 +1,32 @@
> > +#
> > +# Copyright 2021 Advanced Micro Devices, Inc.
> > +#
> > +# Permission is hereby granted, free of charge, to any person obtaining a
> > +# copy of this software and associated documentation files (the "Software"),
> > +# to deal in the Software without restriction, including without limitation
> > +# the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > +# and/or sell copies of the Software, and to permit persons to whom the
> > +# Software is furnished to do so, subject to the following conditions:
> > +#
> > +# The above copyright notice and this permission notice shall be included in
> > +# all copies or substantial portions of the Software.
> > +#
> > +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> > +# THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > +# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > +# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> > +# OTHER DEALINGS IN THE SOFTWARE.
> > +#
> > +
> > +AMD_LEGACYDPM_PATH = ../pm/legacy-dpm
> > +
> > +LEGACYDPM_MGR-y = legacy_dpm.o
> > +
> > +LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> > +LEGACYDPM_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
> > +
> > +AMD_LEGACYDPM_POWER = $(addprefix $(AMD_LEGACYDPM_PATH)/,$(LEGACYDPM_MGR-y))
> > +
> > +AMD_POWERPLAY_FILES += $(AMD_LEGACYDPM_POWER)
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/cik_dpm.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/cik_dpm.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/kv_dpm.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_smc.c
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/kv_smc.c
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/kv_smc.c
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/ppsmc.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/r600_dpm.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/r600_dpm.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/si_dpm.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_smc.c
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/si_smc.c
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/si_smc.c
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h
> > b/drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h
> > rename to drivers/gpu/drm/amd/pm/legacy-dpm/sislands_smc.h
> > diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > index 614d8b6a58ad..795a3624cbbf 100644
> > --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> > @@ -28,11 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
> >
> >   include $(AMD_POWERPLAY)
> >
> > -POWER_MGR-y = amd_powerplay.o legacy_dpm.o
> > -
> > -POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> > -
> > -POWER_MGR-$(CONFIG_DRM_AMDGPU_SI)+= si_dpm.o si_smc.o
> > +POWER_MGR-y = amd_powerplay.o
> >
> >   AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR-y))
> >
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amd_powerplay.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/amd_powerplay.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/amd_powerplay.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/cz_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/cz_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/fiji_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/fiji_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/hardwaremanager.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/hardwaremanager.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/hardwaremanager.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/hwmgr.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/hwmgr.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/polaris10_pwrvirus.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/polaris10_pwrvirus.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/power_state.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/power_state.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/power_state.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/pp_debug.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/pp_debug.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_debug.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/pp_endian.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/pp_endian.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_endian.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/pp_thermal.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/pp_thermal.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/pp_thermal.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/ppinterrupt.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/ppinterrupt.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/ppinterrupt.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/rv_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/rv_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu10.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu10.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu10_driver_if.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu10_driver_if.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu11_driver_if.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu7.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu7.h rename to
> > drivers/gpu/drm/amd/pm/powerplay/inc/smu7.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu71.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu71.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu71_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu71_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu71_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu72.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu72.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu72_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu72_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu72_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu73.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu73.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu73_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu73_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu73_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu74.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu74.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu74_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu74_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu74_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu75.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu75.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu75_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu75_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu75_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_common.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu7_common.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_common.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_discrete.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu7_discrete.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_discrete.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_fusion.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu7_fusion.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_fusion.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu7_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu7_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu8.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu8.h rename to
> > drivers/gpu/drm/amd/pm/powerplay/inc/smu8.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu8_fusion.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu8_fusion.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu8_fusion.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu9.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu9.h rename to
> > drivers/gpu/drm/amd/pm/powerplay/inc/smu9.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu9_driver_if.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smu9_driver_if.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_cz.h
> > rename to
> drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_cz.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_ucode_xfer_vi.h
> > rename to
> drivers/gpu/drm/amd/pm/powerplay/inc/smu_ucode_xfer_vi.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smumgr.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smumgr.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/smumgr.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/tonga_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/tonga_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/vega10_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega10_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/vega12/smu9_driver_if.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12/smu9_driver_if.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/vega12_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega12_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/vega20_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/powerplay/inc/vega20_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/aldebaran_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/aldebaran_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/arcturus_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/arcturus_ppsmc.h
> 
> 
> Generic comment -
> 	swsmu/inc => Only common headers
> 	smuXY/ => All specific headers
> 
> Ex: smu11/smu11_driver_if_arcturus.h
[Quan, Evan] Yes, I actually considered this. But to be honest, I do not like mixing .c files with .h files together.
A clean code layout, in my opinion, lets you tell which files to focus on from a simple 'ls' output.
Mixing the header files in just gets people distracted.

In fact, I even considered another layout (the perfect layout in my mind):
 swsmu/inc => common headers
 smu11/inc => specific headers  -> but this would give the power code too many directory levels (imagine some header file sitting under pm/swsmu/smu11/inc/ - that is too many levels!).
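
A rough side-by-side of the two candidate layouts (illustrative sketch only, not taken from the patch):

 layout in this series:
   pm/swsmu/inc/*.h        <- all swsmu headers, common and per-IP
   pm/swsmu/smu11/*.c      <- sources only

 the "perfect" layout (rejected as one level too deep):
   pm/swsmu/inc/*.h        <- common headers
   pm/swsmu/smu11/inc/*.h  <- smu11-specific headers
   pm/swsmu/smu11/*.c      <- sources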

BR
Evan
> 
> Thanks,
> Lijo
> 
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_arcturus.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_arcturus.h
> > diff --git
> > a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
> > similarity index 100%
> > rename from
> > drivers/gpu/drm/amd/pm/inc/smu11_driver_if_cyan_skillfish.h
> > rename to
> > drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_cyan_skillfish.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_navi10.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_navi10.h
> > diff --git
> > a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
> > similarity index 100%
> > rename from
> > drivers/gpu/drm/amd/pm/inc/smu11_driver_if_sienna_cichlid.h
> > rename to
> > drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_sienna_cichlid.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu11_driver_if_vangogh.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu11_driver_if_vangogh.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu12_driver_if.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu12_driver_if.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_aldebaran.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_aldebaran.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu13_driver_if_yellow_carp.h
> > rename to
> > drivers/gpu/drm/amd/pm/swsmu/inc/smu13_driver_if_yellow_carp.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_11_0_cdr_table.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_11_0_cdr_table.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_types.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_types.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0.h
> 
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_7_pptable.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_7_pptable.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_0_pptable.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_0_pptable.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_pmfw.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_pmfw.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_5_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_5_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_pmfw.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_pmfw.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v11_8_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v11_8_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v12_0_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v12_0_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_pmfw.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_1_ppsmc.h
> > diff --git a/drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
> > similarity index 100%
> > rename from drivers/gpu/drm/amd/pm/inc/smu_v13_0_pptable.h
> > rename to drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_pptable.h
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > index a03bbd2a7aa0..1e6d76657bbb 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> > @@ -33,7 +33,6 @@
> >   #include "smu11_driver_if_arcturus.h"
> >   #include "soc15_common.h"
> >   #include "atom.h"
> > -#include "power_state.h"
> >   #include "arcturus_ppt.h"
> >   #include "smu_v11_0_pptable.h"
> >   #include "arcturus_ppsmc.h"
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > index 3c82f5455f88..cc502a35f9ef 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> > @@ -33,7 +33,6 @@
> >   #include "smu13_driver_if_aldebaran.h"
> >   #include "soc15_common.h"
> >   #include "atom.h"
> > -#include "power_state.h"
> >   #include "aldebaran_ppt.h"
> >   #include "smu_v13_0_pptable.h"
> >   #include "aldebaran_ppsmc.h"
> >

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power
  2021-12-01  5:39     ` Quan, Evan
@ 2021-12-01  6:38       ` Lazar, Lijo
  2021-12-01  7:24         ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-12-01  6:38 UTC (permalink / raw)
  To: Quan, Evan, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian



On 12/1/2021 11:09 AM, Quan, Evan wrote:
> [AMD Official Use Only]
> 
> 
> 
>> -----Original Message-----
>> From: Lazar, Lijo <Lijo.Lazar@amd.com>
>> Sent: Tuesday, November 30, 2021 9:58 PM
>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
>> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
>> Subject: Re: [PATCH V2 13/17] drm/amd/pm: do not expose the
>> smu_context structure used internally in power
>>
>>
>>
>> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>> This can hide the power implementation details. And as was done for the
> >>> powerplay framework, we hook the smu_context to
> >>> adev->powerplay.pp_handle.
>>>
>>> Signed-off-by: Evan Quan <evan.quan@amd.com>
>>> Change-Id: I3969c9f62a8b63dc6e4321a488d8f15022ffeb3d
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  6 --
>>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |  9 +++
>>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 51 ++++++++++------
>>>    drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       | 11 +---
>>>    drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 60
>> +++++++++++++------
>>>    .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  9 +--
>>>    .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |  9 +--
>>>    .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |  9 +--
>>>    .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  4 +-
>>>    .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  9 +--
>>>    .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  8 +--
>>>    11 files changed, 111 insertions(+), 74 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index c987813a4996..fefabd568483 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -99,7 +99,6 @@
>>>    #include "amdgpu_gem.h"
>>>    #include "amdgpu_doorbell.h"
>>>    #include "amdgpu_amdkfd.h"
>>> -#include "amdgpu_smu.h"
>>>    #include "amdgpu_discovery.h"
>>>    #include "amdgpu_mes.h"
>>>    #include "amdgpu_umc.h"
>>> @@ -950,11 +949,6 @@ struct amdgpu_device {
>>>
>>>    	/* powerplay */
>>>    	struct amd_powerplay		powerplay;
>>> -
>>> -	/* smu */
>>> -	struct smu_context		smu;
>>> -
>>> -	/* dpm */
>>>    	struct amdgpu_pm		pm;
>>>    	u32				cg_flags;
>>>    	u32				pg_flags;
>>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> index 7919e96e772b..da6a82430048 100644
>>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>> @@ -25,6 +25,9 @@
>>>    #define __KGD_PP_INTERFACE_H__
>>>
>>>    extern const struct amdgpu_ip_block_version pp_smu_ip_block;
>>> +extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
>>> +extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
>>> +extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
>>>
>>>    enum smu_event_type {
>>>    	SMU_EVENT_RESET_COMPLETE = 0,
>>> @@ -244,6 +247,12 @@ enum pp_power_type
>>>    	PP_PWR_TYPE_FAST,
>>>    };
>>>
>>> +enum smu_ppt_limit_type
>>> +{
>>> +	SMU_DEFAULT_PPT_LIMIT = 0,
>>> +	SMU_FAST_PPT_LIMIT,
>>> +};
>>> +
>>
>> This is a contradiction. If the entry point is dpm, this shouldn't be here, and
>> the external interface doesn't need to know about internal data types.
> [Quan, Evan] This is needed by amdgpu_hwmon_show_power_label() from amdgpu_pm.c.
> So, it has to be put somewhere that can be accessed from outside (of power).
> That makes kgd_pp_interface.h the right place.

The public data types are enum pp_power_type and enum pp_power_limit_level.

The first one describes the type of power limit (fast/slow/sustained),
and the second one covers the min/max/default values for the different limits.

To show the label, use the pp_power_type type.
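
Something along these lines, as a rough sketch (the function shape and the
label strings are my illustration, not taken from the patch):

	static ssize_t amdgpu_hwmon_show_power_label(struct device *dev,
						     struct device_attribute *attr,
						     char *buf)
	{
		/* the hwmon channel index carries the public power type */
		int type = to_sensor_dev_attr(attr)->index;

		return sysfs_emit(buf, "%s\n",
				  type == PP_PWR_TYPE_FAST ? "fastPPT" : "slowPPT");
	}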

> 
>>
>>>    #define PP_GROUP_MASK        0xF0000000
>>>    #define PP_GROUP_SHIFT       28
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> index 8f0ae58f4292..a5cbbf9367fe 100644
>>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>> @@ -31,6 +31,7 @@
>>>    #include "amdgpu_display.h"
>>>    #include "hwmgr.h"
>>>    #include <linux/power_supply.h>
>>> +#include "amdgpu_smu.h"
>>>
>>>    #define amdgpu_dpm_enable_bapm(adev, e) \
>>>
>>> ((adev)->powerplay.pp_funcs->enable_bapm((adev)-
>>> powerplay.pp_handle,
>>> (e))) @@ -213,7 +214,7 @@ int amdgpu_dpm_baco_reset(struct
>>> amdgpu_device *adev)
>>>
>>>    bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device
>> *adev)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (is_support_sw_smu(adev))
>>>    		return smu_mode1_reset_is_support(smu); @@ -223,7
>> +224,7 @@ bool
>>> amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
>>>
>>>    int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (is_support_sw_smu(adev))
>>>    		return smu_mode1_reset(smu);
>>> @@ -276,7 +277,7 @@ int amdgpu_dpm_set_df_cstate(struct
>> amdgpu_device
>>> *adev,
>>>
>>>    int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device
>> *adev, bool en)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (is_support_sw_smu(adev))
>>>    		return smu_allow_xgmi_power_down(smu, en); @@ -341,7
>> +342,7 @@
>>> void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
>>>    		mutex_unlock(&adev->pm.mutex);
>>>
>>>    		if (is_support_sw_smu(adev))
>>> -			smu_set_ac_dc(&adev->smu);
>>> +			smu_set_ac_dc(adev->powerplay.pp_handle);
>>>    	}
>>>    }
>>>
>>> @@ -423,15 +424,16 @@ int amdgpu_pm_load_smu_firmware(struct
>>> amdgpu_device *adev, uint32_t *smu_versio
>>>
>>>    int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool
>> enable)
>>>    {
>>> -	return smu_set_light_sbr(&adev->smu, enable);
>>> +	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
>>>    }
>>>
>>>    int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device
>> *adev, uint32_t size)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	int ret = 0;
>>>
>>> -	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
>>> send_hbm_bad_pages_num)
>>> -		ret = adev->smu.ppt_funcs-
>>> send_hbm_bad_pages_num(&adev->smu, size);
>>> +	if (is_support_sw_smu(adev))
>>> +		ret = smu_send_hbm_bad_pages_num(smu, size);
>>>
>>>    	return ret;
>>>    }
>>> @@ -446,7 +448,7 @@ int amdgpu_dpm_get_dpm_freq_range(struct
>>> amdgpu_device *adev,
>>>
>>>    	switch (type) {
>>>    	case PP_SCLK:
>>> -		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK,
>> min, max);
>>> +		return smu_get_dpm_freq_range(adev-
>>> powerplay.pp_handle, SMU_SCLK,
>>> +min, max);
>>>    	default:
>>>    		return -EINVAL;
>>>    	}
>>> @@ -457,12 +459,14 @@ int amdgpu_dpm_set_soft_freq_range(struct
>> amdgpu_device *adev,
>>>    				   uint32_t min,
>>>    				   uint32_t max)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>>    	if (!is_support_sw_smu(adev))
>>>    		return -EOPNOTSUPP;
>>>
>>>    	switch (type) {
>>>    	case PP_SCLK:
>>> -		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK,
>> min, max);
>>> +		return smu_set_soft_freq_range(smu, SMU_SCLK, min,
>> max);
>>>    	default:
>>>    		return -EINVAL;
>>>    	}
>>> @@ -470,33 +474,41 @@ int amdgpu_dpm_set_soft_freq_range(struct
>>> amdgpu_device *adev,
>>>
>>>    int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>>    	if (!is_support_sw_smu(adev))
>>>    		return 0;
>>>
>>> -	return smu_write_watermarks_table(&adev->smu);
>>> +	return smu_write_watermarks_table(smu);
>>>    }
>>>
>>>    int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
>>>    			      enum smu_event_type event,
>>>    			      uint64_t event_arg)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>>    	if (!is_support_sw_smu(adev))
>>>    		return -EOPNOTSUPP;
>>>
>>> -	return smu_wait_for_event(&adev->smu, event, event_arg);
>>> +	return smu_wait_for_event(smu, event, event_arg);
>>>    }
>>>
>>>    int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev,
>> uint32_t *value)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>>    	if (!is_support_sw_smu(adev))
>>>    		return -EOPNOTSUPP;
>>>
>>> -	return smu_get_status_gfxoff(&adev->smu, value);
>>> +	return smu_get_status_gfxoff(smu, value);
>>>    }
>>>
>>>    uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct
>> amdgpu_device *adev)
>>>    {
>>> -	return atomic64_read(&adev->smu.throttle_int_counter);
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>> +	return atomic64_read(&smu->throttle_int_counter);
>>>    }
>>>
>>>    /* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
>>> @@ -518,10 +530,12 @@ void amdgpu_dpm_gfx_state_change(struct
>> amdgpu_device *adev,
>>>    int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
>>>    			    void *umc_ecc)
>>>    {
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>>    	if (!is_support_sw_smu(adev))
>>>    		return -EOPNOTSUPP;
>>>
>>> -	return smu_get_ecc_info(&adev->smu, umc_ecc);
>>> +	return smu_get_ecc_info(smu, umc_ecc);
>>>    }
>>>
>>>    struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct
>>> amdgpu_device *adev, @@ -919,9 +933,10 @@ int
>> amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
>>>    int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
>>>    {
>>>    	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>> -	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
>>> -	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
>>> +	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
>>> +	    (is_support_sw_smu(adev) && smu->is_apu) ||
>>>    		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
>>>    		return true;
>>>
>>> @@ -944,7 +959,9 @@ int amdgpu_dpm_set_pp_table(struct
>> amdgpu_device
>>> *adev,
>>>
>>>    int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
>>>    {
>>> -	return adev->smu.cpu_core_num;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>> +	return smu->cpu_core_num;
>>>    }
>>>
>>>    void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev) diff
>>> --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> index 29791bb21fba..f44139b415b4 100644
>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
>>> @@ -205,12 +205,6 @@ enum smu_power_src_type
>>>    	SMU_POWER_SOURCE_COUNT,
>>>    };
>>>
>>> -enum smu_ppt_limit_type
>>> -{
>>> -	SMU_DEFAULT_PPT_LIMIT = 0,
>>> -	SMU_FAST_PPT_LIMIT,
>>> -};
>>> -
>>>    enum smu_ppt_limit_level
>>>    {
>>>    	SMU_PPT_LIMIT_MIN = -1,
>>> @@ -1389,10 +1383,6 @@ int smu_mode1_reset(struct smu_context
>> *smu);
>>>
>>>    extern const struct amd_ip_funcs smu_ip_funcs;
>>>
>>> -extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
>>> -extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
>>> -extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
>>> -
>>>    bool is_support_sw_smu(struct amdgpu_device *adev);
>>>    bool is_support_cclk_dpm(struct amdgpu_device *adev);
>>>    int smu_write_watermarks_table(struct smu_context *smu); @@ -1416,6
>>> +1406,7 @@ int smu_wait_for_event(struct smu_context *smu, enum
>> smu_event_type event,
>>>    int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
>>>    int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t
>> size);
>>>    void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev);
>>> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t
>>> +size);
>>>
>>>    #endif
>>>    #endif
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> index eaed5aba7547..2c3fd3cfef05 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
>>> @@ -468,7 +468,7 @@ bool is_support_sw_smu(struct amdgpu_device
>> *adev)
>>>
>>>    bool is_support_cclk_dpm(struct amdgpu_device *adev)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
>>>    		return false;
>>> @@ -572,7 +572,7 @@ static int
>>> smu_get_driver_allowed_feature_mask(struct smu_context *smu)
>>>
>>>    static int smu_set_funcs(struct amdgpu_device *adev)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
>>>    		smu->od_enabled = true;
>>> @@ -624,7 +624,11 @@ static int smu_set_funcs(struct amdgpu_device
>> *adev)
>>>    static int smu_early_init(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu;
>>> +
>>> +	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
>>> +	if (!smu)
>>> +		return -ENOMEM;
>>>
>>>    	smu->adev = adev;
>>>    	smu->pm_enabled = !!amdgpu_dpm;
>>> @@ -684,7 +688,7 @@ static int smu_set_default_dpm_table(struct
>> smu_context *smu)
>>>    static int smu_late_init(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	int ret = 0;
>>>
>>>    	smu_set_fine_grain_gfx_freq_parameters(smu);
>>> @@ -730,7 +734,7 @@ static int smu_late_init(void *handle)
>>>
>>>    	smu_get_fan_parameters(smu);
>>>
>>> -	smu_handle_task(&adev->smu,
>>> +	smu_handle_task(smu,
>>>    			smu->smu_dpm.dpm_level,
>>>    			AMD_PP_TASK_COMPLETE_INIT,
>>>    			false);
>>> @@ -1020,7 +1024,7 @@ static void smu_interrupt_work_fn(struct
>> work_struct *work)
>>>    static int smu_sw_init(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	int ret;
>>>
>>>    	smu->pool_size = adev->pm.smu_prv_buffer_size; @@ -1095,7
>> +1099,7
>>> @@ static int smu_sw_init(void *handle)
>>>    static int smu_sw_fini(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	int ret;
>>>
>>>    	ret = smu_smc_table_sw_fini(smu);
>>> @@ -1330,7 +1334,7 @@ static int smu_hw_init(void *handle)
>>>    {
>>>    	int ret;
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev))
>> {
>>>    		smu->pm_enabled = false;
>>> @@ -1344,10 +1348,10 @@ static int smu_hw_init(void *handle)
>>>    	}
>>>
>>>    	if (smu->is_apu) {
>>> -		smu_powergate_sdma(&adev->smu, false);
>>> +		smu_powergate_sdma(smu, false);
>>>    		smu_dpm_set_vcn_enable(smu, true);
>>>    		smu_dpm_set_jpeg_enable(smu, true);
>>> -		smu_set_gfx_cgpg(&adev->smu, true);
>>> +		smu_set_gfx_cgpg(smu, true);
>>>    	}
>>>
>>>    	if (!smu->pm_enabled)
>>> @@ -1501,13 +1505,13 @@ static int smu_smc_hw_cleanup(struct
>> smu_context *smu)
>>>    static int smu_hw_fini(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
>>>    		return 0;
>>>
>>>    	if (smu->is_apu) {
>>> -		smu_powergate_sdma(&adev->smu, true);
>>> +		smu_powergate_sdma(smu, true);
>>>    	}
>>>
>>>    	smu_dpm_set_vcn_enable(smu, false); @@ -1524,6 +1528,14 @@
>> static
>>> int smu_hw_fini(void *handle)
>>>    	return smu_smc_hw_cleanup(smu);
>>>    }
>>>
>>> +static void smu_late_fini(void *handle) {
>>> +	struct amdgpu_device *adev = handle;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +
>>> +	kfree(smu);
>>> +}
>>> +
>>
>> This doesn't look related to this change.
> [Quan, Evan] "smu" is updated as dynamically allocated. We need to find a place to get it freed.
> As did in powerplay framework, ->late_fini is the right place.

Thanks, missed the change for dynamic allocation.
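
Putting the two hunks side by side for reference (a condensed sketch; the
pp_handle wiring is implied by the later hunks rather than shown here):

	static int smu_early_init(void *handle)
	{
		struct amdgpu_device *adev = (struct amdgpu_device *)handle;
		struct smu_context *smu;

		/* the context is now heap-allocated at early_init ... */
		smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
		if (!smu)
			return -ENOMEM;

		smu->adev = adev;
		adev->powerplay.pp_handle = smu;	/* assumed wiring */

		return 0;	/* remaining init trimmed */
	}

	static void smu_late_fini(void *handle)
	{
		struct amdgpu_device *adev = handle;
		struct smu_context *smu = adev->powerplay.pp_handle;

		/* ... and released at late_fini, as in the powerplay path */
		kfree(smu);
	}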

>>
>>>    static int smu_reset(struct smu_context *smu)
>>>    {
>>>    	struct amdgpu_device *adev = smu->adev; @@ -1551,7 +1563,7 @@
>>> static int smu_reset(struct smu_context *smu)
>>>    static int smu_suspend(void *handle)
>>>    {
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	int ret;
>>>
>>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
>> @@
>>> -1570,7 +1582,7 @@ static int smu_suspend(void *handle)
>>>
>>>    	/* skip CGPG when in S0ix */
>>>    	if (smu->is_apu && !adev->in_s0ix)
>>> -		smu_set_gfx_cgpg(&adev->smu, false);
>>> +		smu_set_gfx_cgpg(smu, false);
>>>
>>>    	return 0;
>>>    }
>>> @@ -1579,7 +1591,7 @@ static int smu_resume(void *handle)
>>>    {
>>>    	int ret;
>>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
>>>    		return 0;
>>> @@ -1602,7 +1614,7 @@ static int smu_resume(void *handle)
>>>    	}
>>>
>>>    	if (smu->is_apu)
>>> -		smu_set_gfx_cgpg(&adev->smu, true);
>>> +		smu_set_gfx_cgpg(smu, true);
>>>
>>>    	smu->disable_uclk_switch = 0;
>>>
>>> @@ -2134,6 +2146,7 @@ const struct amd_ip_funcs smu_ip_funcs = {
>>>    	.sw_fini = smu_sw_fini,
>>>    	.hw_init = smu_hw_init,
>>>    	.hw_fini = smu_hw_fini,
>>> +	.late_fini = smu_late_fini,
>>>    	.suspend = smu_suspend,
>>>    	.resume = smu_resume,
>>>    	.is_idle = NULL,
>>> @@ -3198,7 +3211,7 @@ int smu_stb_collect_info(struct smu_context
>> *smu, void *buf, uint32_t size)
>>>    static int smu_stb_debugfs_open(struct inode *inode, struct file *filp)
>>>    {
>>>    	struct amdgpu_device *adev = filp->f_inode->i_private;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	unsigned char *buf;
>>>    	int r;
>>>
>>> @@ -3223,7 +3236,7 @@ static ssize_t smu_stb_debugfs_read(struct file
>> *filp, char __user *buf, size_t
>>>    				loff_t *pos)
>>>    {
>>>    	struct amdgpu_device *adev = filp->f_inode->i_private;
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>
>>>    	if (!filp->private_data)
>>> @@ -3264,7 +3277,7 @@ void amdgpu_smu_stb_debug_fs_init(struct
>> amdgpu_device *adev)
>>>    {
>>>    #if defined(CONFIG_DEBUG_FS)
>>>
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>
>>>    	if (!smu->stb_context.stb_buf_size)
>>>    		return;
>>> @@ -3276,5 +3289,14 @@ void amdgpu_smu_stb_debug_fs_init(struct
>> amdgpu_device *adev)
>>>    			    &smu_stb_debugfs_fops,
>>>    			    smu->stb_context.stb_buf_size);
>>>    #endif
>>> +}
>>> +
>>> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t
>>> +size) {
>>> +	int ret = 0;
>>> +
>>> +	if (smu->ppt_funcs->send_hbm_bad_pages_num)
>>> +		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu,
>> size);
>>>
>>> +	return ret;
>>
>> This also looks unrelated.
> [Quan, Evan] This was moved from amdgpu_dpm.c to here (amdgpu_smu.c),
> as smu_context is now an internal data structure of the swsmu framework.
> So any access to smu->ppt_funcs should be done from amdgpu_smu.c.
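> 
> To make the pattern explicit (condensed from the hunks above):
> 
> 	/* before: amdgpu_dpm.c dereferenced swsmu internals directly */
> 	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> 		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
> 
> 	/* after: amdgpu_dpm.c only calls the exported helper; the
> 	 * ppt_funcs dereference now lives in amdgpu_smu.c */
> 	ret = smu_send_hbm_bad_pages_num(smu, size);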

Maybe this change can go together with the corresponding API refactor
change.

Thanks,
Lijo

> 
> BR
> Evan
>>
>> Thanks,
>> Lijo
>>
>>>    }
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
>>> index 05defeee0c87..a03bbd2a7aa0 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
>>> @@ -2082,7 +2082,8 @@ static int arcturus_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			     struct i2c_msg *msg, int num_msgs)
>>>    {
>>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
>>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +	struct smu_table_context *smu_table = &smu->smu_table;
>>>    	struct smu_table *table = &smu_table->driver_table;
>>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>>>    	int i, j, r, c;
>>> @@ -2128,9 +2129,9 @@ static int arcturus_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			}
>>>    		}
>>>    	}
>>> -	mutex_lock(&adev->smu.mutex);
>>> -	r = smu_cmn_update_table(&adev->smu,
>> SMU_TABLE_I2C_COMMANDS, 0, req, true);
>>> -	mutex_unlock(&adev->smu.mutex);
>>> +	mutex_lock(&smu->mutex);
>>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0,
>> req, true);
>>> +	mutex_unlock(&smu->mutex);
>>>    	if (r)
>>>    		goto fail;
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
>>> index 2bb7816b245a..37e11716e919 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
>>> @@ -2779,7 +2779,8 @@ static int navi10_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			   struct i2c_msg *msg, int num_msgs)
>>>    {
>>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
>>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +	struct smu_table_context *smu_table = &smu->smu_table;
>>>    	struct smu_table *table = &smu_table->driver_table;
>>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>>>    	int i, j, r, c;
>>> @@ -2825,9 +2826,9 @@ static int navi10_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			}
>>>    		}
>>>    	}
>>> -	mutex_lock(&adev->smu.mutex);
>>> -	r = smu_cmn_update_table(&adev->smu,
>> SMU_TABLE_I2C_COMMANDS, 0, req, true);
>>> -	mutex_unlock(&adev->smu.mutex);
>>> +	mutex_lock(&smu->mutex);
>>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0,
>> req, true);
>>> +	mutex_unlock(&smu->mutex);
>>>    	if (r)
>>>    		goto fail;
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
>>> index 777f717c37ae..6a5064f4ea86 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
>>> @@ -3459,7 +3459,8 @@ static int sienna_cichlid_i2c_xfer(struct
>> i2c_adapter *i2c_adap,
>>>    				   struct i2c_msg *msg, int num_msgs)
>>>    {
>>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
>>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +	struct smu_table_context *smu_table = &smu->smu_table;
>>>    	struct smu_table *table = &smu_table->driver_table;
>>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>>>    	int i, j, r, c;
>>> @@ -3505,9 +3506,9 @@ static int sienna_cichlid_i2c_xfer(struct
>> i2c_adapter *i2c_adap,
>>>    			}
>>>    		}
>>>    	}
>>> -	mutex_lock(&adev->smu.mutex);
>>> -	r = smu_cmn_update_table(&adev->smu,
>> SMU_TABLE_I2C_COMMANDS, 0, req, true);
>>> -	mutex_unlock(&adev->smu.mutex);
>>> +	mutex_lock(&smu->mutex);
>>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0,
>> req, true);
>>> +	mutex_unlock(&smu->mutex);
>>>    	if (r)
>>>    		goto fail;
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
>>> index 28b7c0562b99..2a53b5b1d261 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
>>> @@ -1372,7 +1372,7 @@ static int smu_v11_0_set_irq_state(struct
>> amdgpu_device *adev,
>>>    				   unsigned tyep,
>>>    				   enum amdgpu_interrupt_state state)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	uint32_t low, high;
>>>    	uint32_t val = 0;
>>>
>>> @@ -1441,7 +1441,7 @@ static int smu_v11_0_irq_process(struct
>> amdgpu_device *adev,
>>>    				 struct amdgpu_irq_src *source,
>>>    				 struct amdgpu_iv_entry *entry)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	uint32_t client_id = entry->client_id;
>>>    	uint32_t src_id = entry->src_id;
>>>    	/*
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
>>> index 6e781cee8bb6..3c82f5455f88 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
>>> @@ -1484,7 +1484,8 @@ static int aldebaran_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			      struct i2c_msg *msg, int num_msgs)
>>>    {
>>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
>>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>> +	struct smu_table_context *smu_table = &smu->smu_table;
>>>    	struct smu_table *table = &smu_table->driver_table;
>>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
>>>    	int i, j, r, c;
>>> @@ -1530,9 +1531,9 @@ static int aldebaran_i2c_xfer(struct i2c_adapter
>> *i2c_adap,
>>>    			}
>>>    		}
>>>    	}
>>> -	mutex_lock(&adev->smu.mutex);
>>> -	r = smu_cmn_update_table(&adev->smu,
>> SMU_TABLE_I2C_COMMANDS, 0, req, true);
>>> -	mutex_unlock(&adev->smu.mutex);
>>> +	mutex_lock(&smu->mutex);
>>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0,
>> req, true);
>>> +	mutex_unlock(&smu->mutex);
>>>    	if (r)
>>>    		goto fail;
>>>
>>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
>>> b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
>>> index 55421ea622fb..4ed01e9d88fb 100644
>>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
>>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
>>> @@ -1195,7 +1195,7 @@ static int smu_v13_0_set_irq_state(struct
>> amdgpu_device *adev,
>>>    				   unsigned tyep,
>>>    				   enum amdgpu_interrupt_state state)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	uint32_t low, high;
>>>    	uint32_t val = 0;
>>>
>>> @@ -1270,7 +1270,7 @@ static int smu_v13_0_irq_process(struct
>> amdgpu_device *adev,
>>>    				 struct amdgpu_irq_src *source,
>>>    				 struct amdgpu_iv_entry *entry)
>>>    {
>>> -	struct smu_context *smu = &adev->smu;
>>> +	struct smu_context *smu = adev->powerplay.pp_handle;
>>>    	uint32_t client_id = entry->client_id;
>>>    	uint32_t src_id = entry->src_id;
>>>    	/*
>>> @@ -1316,11 +1316,11 @@ static int smu_v13_0_irq_process(struct
>> amdgpu_device *adev,
>>>    			switch (ctxid) {
>>>    			case 0x3:
>>>    				dev_dbg(adev->dev, "Switched to AC
>> mode!\n");
>>> -				smu_v13_0_ack_ac_dc_interrupt(&adev-
>>> smu);
>>> +				smu_v13_0_ack_ac_dc_interrupt(smu);
>>>    				break;
>>>    			case 0x4:
>>>    				dev_dbg(adev->dev, "Switched to DC
>> mode!\n");
>>> -				smu_v13_0_ack_ac_dc_interrupt(&adev-
>>> smu);
>>> +				smu_v13_0_ack_ac_dc_interrupt(smu);
>>>    				break;
>>>    			case 0x7:
>>>    				/*
>>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power
  2021-12-01  3:33       ` Lazar, Lijo
@ 2021-12-01  7:07         ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  7:07 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Wednesday, December 1, 2021 11:33 AM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Feng, Kenneth
> <Kenneth.Feng@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>
> Subject: Re: [PATCH V2 01/17] drm/amd/pm: do not expose implementation
> details to other blocks out of power
> 
> 
> 
> On 12/1/2021 7:29 AM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -----Original Message-----
> >> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of
> >> Lazar, Lijo
> >> Sent: Tuesday, November 30, 2021 4:10 PM
> >> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Feng, Kenneth
> >> <Kenneth.Feng@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>
> >> Subject: Re: [PATCH V2 01/17] drm/amd/pm: do not expose
> >> implementation details to other blocks out of power
> >>
> >>
> >>
> >> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>> Those implementation details (whether swsmu is supported, whether some
> >>> ppt_funcs are implemented, accessing internal statistics ...) should be
> >>> kept internal. It's not good practice, and is even error prone, to expose
> >>> implementation details.
> >>>
> >>> Signed-off-by: Evan Quan <evan.quan@amd.com>
> >>> Change-Id: Ibca3462ceaa26a27a9145282b60c6ce5deca7752
> >>> ---
> >>>    drivers/gpu/drm/amd/amdgpu/aldebaran.c        |  2 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   | 25 ++---
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  6 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       | 18 +---
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |  7 --
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |  5 +-
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c       |  5 +-
> >>>    drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c   |  2 +-
> >>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |  4 +
> >>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 95
> >> +++++++++++++++++++
> >>>    drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       | 25 ++++-
> >>>    drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       |  9 +-
> >>>    drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 16 ++--
> >>>    13 files changed, 155 insertions(+), 64 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> >>> b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> >>> index bcfdb63b1d42..a545df4efce1 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
> >>> @@ -260,7 +260,7 @@ static int aldebaran_mode2_restore_ip(struct
> >> amdgpu_device *adev)
> >>>    	adev->gfx.rlc.funcs->resume(adev);
> >>>
> >>>    	/* Wait for FW reset event complete */
> >>> -	r = smu_wait_for_event(adev, SMU_EVENT_RESET_COMPLETE, 0);
> >>> +	r = amdgpu_dpm_wait_for_event(adev,
> >> SMU_EVENT_RESET_COMPLETE, 0);
> >>>    	if (r) {
> >>>    		dev_err(adev->dev,
> >>>    			"Failed to get response from firmware after reset\n");
> >> diff --git
> >>> a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> index 164d6a9e9fbb..0d1f00b24aae 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> @@ -1585,22 +1585,25 @@ static int amdgpu_debugfs_sclk_set(void
> >>> *data,
> >> u64 val)
> >>>    		return ret;
> >>>    	}
> >>>
> >>> -	if (is_support_sw_smu(adev)) {
> >>> -		ret = smu_get_dpm_freq_range(&adev->smu, SMU_SCLK,
> >> &min_freq, &max_freq);
> >>> -		if (ret || val > max_freq || val < min_freq)
> >>> -			return -EINVAL;
> >>> -		ret = smu_set_soft_freq_range(&adev->smu, SMU_SCLK,
> >> (uint32_t)val, (uint32_t)val);
> >>> -	} else {
> >>> -		return 0;
> >>> +	ret = amdgpu_dpm_get_dpm_freq_range(adev, PP_SCLK,
> >> &min_freq, &max_freq);
> >>> +	if (ret == -EOPNOTSUPP) {
> >>> +		ret = 0;
> >>> +		goto out;
> >>>    	}
> >>> +	if (ret || val > max_freq || val < min_freq) {
> >>> +		ret = -EINVAL;
> >>> +		goto out;
> >>> +	}
> >>> +
> >>> +	ret = amdgpu_dpm_set_soft_freq_range(adev, PP_SCLK,
> >> (uint32_t)val, (uint32_t)val);
> >>> +	if (ret)
> >>> +		ret = -EINVAL;
> >>>
> >>> +out:
> >>>    	pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> >>>    	pm_runtime_put_autosuspend(adev_to_drm(adev)->dev);
> >>>
> >>> -	if (ret)
> >>> -		return -EINVAL;
> >>> -
> >>> -	return 0;
> >>> +	return ret;
> >>>    }
> >>>
> >>>    DEFINE_DEBUGFS_ATTRIBUTE(fops_ib_preempt, NULL, diff --git
> >>> a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> index 1989f9e9379e..41cc1ffb5809 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> @@ -2617,7 +2617,7 @@ static int amdgpu_device_ip_late_init(struct
> >> amdgpu_device *adev)
> >>>    	if (adev->asic_type == CHIP_ARCTURUS &&
> >>>    	    amdgpu_passthrough(adev) &&
> >>>    	    adev->gmc.xgmi.num_physical_nodes > 1)
> >>> -		smu_set_light_sbr(&adev->smu, true);
> >>> +		amdgpu_dpm_set_light_sbr(adev, true);
> >>>
> >>>    	if (adev->gmc.xgmi.num_physical_nodes > 1) {
> >>>    		mutex_lock(&mgpu_info.mutex);
> >>> @@ -2857,7 +2857,7 @@ static int
> >> amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
> >>>    	int i, r;
> >>>
> >>>    	if (adev->in_s0ix)
> >>> -		amdgpu_gfx_state_change_set(adev,
> >> sGpuChangeState_D3Entry);
> >>> +		amdgpu_dpm_gfx_state_change(adev,
> >> sGpuChangeState_D3Entry);
> >>>
> >>>    	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
> >>>    		if (!adev->ip_blocks[i].status.valid)
> >>> @@ -3982,7 +3982,7 @@ int amdgpu_device_resume(struct drm_device
> >> *dev, bool fbcon)
> >>>    		return 0;
> >>>
> >>>    	if (adev->in_s0ix)
> >>> -		amdgpu_gfx_state_change_set(adev,
> >> sGpuChangeState_D0Entry);
> >>> +		amdgpu_dpm_gfx_state_change(adev,
> >> sGpuChangeState_D0Entry);
> >>>
> >>>    	/* post card */
> >>>    	if (amdgpu_device_need_post(adev)) { diff --git
> >>> a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> >>> index 1916ec84dd71..3d8f82dc8c97 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> >>> @@ -615,7 +615,7 @@ int amdgpu_get_gfx_off_status(struct
> >> amdgpu_device
> >>> *adev, uint32_t *value)
> >>>
> >>>    	mutex_lock(&adev->gfx.gfx_off_mutex);
> >>>
> >>> -	r = smu_get_status_gfxoff(adev, value);
> >>> +	r = amdgpu_dpm_get_status_gfxoff(adev, value);
> >>>
> >>>    	mutex_unlock(&adev->gfx.gfx_off_mutex);
> >>>
> >>> @@ -852,19 +852,3 @@ int amdgpu_gfx_get_num_kcq(struct
> >> amdgpu_device *adev)
> >>>    	}
> >>>    	return amdgpu_num_kcq;
> >>>    }
> >>> -
> >>> -/* amdgpu_gfx_state_change_set - Handle gfx power state change set
> >>> - * @adev: amdgpu_device pointer
> >>> - * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2
> >>> -sGpuChangeState_D3Entry)
> >>> - *
> >>> - */
> >>> -
> >>> -void amdgpu_gfx_state_change_set(struct amdgpu_device *adev,
> enum
> >>> gfx_change_state state) -{
> >>> -	mutex_lock(&adev->pm.mutex);
> >>> -	if (adev->powerplay.pp_funcs &&
> >>> -	    adev->powerplay.pp_funcs->gfx_state_change_set)
> >>> -		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> >>> -			(adev)->powerplay.pp_handle, state));
> >>> -	mutex_unlock(&adev->pm.mutex);
> >>> -}
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> >>> index f851196c83a5..776c886fd94a 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> >>> @@ -47,12 +47,6 @@ enum amdgpu_gfx_pipe_priority {
> >>>    	AMDGPU_GFX_PIPE_PRIO_HIGH = AMDGPU_RING_PRIO_2
> >>>    };
> >>>
> >>> -/* Argument for PPSMC_MSG_GpuChangeState */ -enum
> >> gfx_change_state {
> >>> -	sGpuChangeState_D0Entry = 1,
> >>> -	sGpuChangeState_D3Entry,
> >>> -};
> >>> -
> >>>    #define AMDGPU_GFX_QUEUE_PRIORITY_MINIMUM  0
> >>>    #define AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM  15
> >>>
> >>> @@ -410,5 +404,4 @@ int amdgpu_gfx_cp_ecc_error_irq(struct
> >> amdgpu_device *adev,
> >>>    uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg);
> >>>    void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg,
> >> uint32_t v);
> >>>    int amdgpu_gfx_get_num_kcq(struct amdgpu_device *adev); -void
> >>> amdgpu_gfx_state_change_set(struct amdgpu_device *adev, enum
> >> gfx_change_state state);
> >>>    #endif
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> >>> index 3c623e589b79..35c4aec04a7e 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
> >>> @@ -901,7 +901,7 @@ static void amdgpu_ras_get_ecc_info(struct
> >> amdgpu_device *adev, struct ras_err_d
> >>>    	 * choosing right query method according to
> >>>    	 * whether smu support query error information
> >>>    	 */
> >>> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(ras->umc_ecc));
> >>> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(ras->umc_ecc));
> >>>    	if (ret == -EOPNOTSUPP) {
> >>>    		if (adev->umc.ras_funcs &&
> >>>    			adev->umc.ras_funcs->query_ras_error_count)
> >>> @@ -2132,8 +2132,7 @@ int amdgpu_ras_recovery_init(struct
> >> amdgpu_device *adev)
> >>>    		if (ret)
> >>>    			goto free;
> >>>
> >>> -		if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num)
> >>> -			adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num(&adev->smu, con-
> >>> eeprom_control.ras_num_recs);
> >>> +		amdgpu_dpm_send_hbm_bad_pages_num(adev,
> >>> +con->eeprom_control.ras_num_recs);
> >>>    	}
> >>>
> >>>    #ifdef CONFIG_X86_MCE_AMD
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> >>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> >>> index 6e4bea012ea4..5fed26c8db44 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umc.c
> >>> @@ -97,7 +97,7 @@ int amdgpu_umc_process_ras_data_cb(struct
> >> amdgpu_device *adev,
> >>>    	int ret = 0;
> >>>
> >>>    	kgd2kfd_set_sram_ecc_flag(adev->kfd.dev);
> >>> -	ret = smu_get_ecc_info(&adev->smu, (void *)&(con->umc_ecc));
> >>> +	ret = amdgpu_dpm_get_ecc_info(adev, (void *)&(con->umc_ecc));
> >>>    	if (ret == -EOPNOTSUPP) {
> >>>    		if (adev->umc.ras_funcs &&
> >>>    		    adev->umc.ras_funcs->query_ras_error_count)
> >>> @@ -160,8 +160,7 @@ int amdgpu_umc_process_ras_data_cb(struct
> >> amdgpu_device *adev,
> >>>    						err_data->err_addr_cnt);
> >>>    			amdgpu_ras_save_bad_pages(adev);
> >>>
> >>> -			if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num)
> >>> -				adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num(&adev->smu, con-
> >>> eeprom_control.ras_num_recs);
> >>> +			amdgpu_dpm_send_hbm_bad_pages_num(adev,
> >>> +con->eeprom_control.ras_num_recs);
> >>>    		}
> >>>
> >>>    		amdgpu_ras_reset_gpu(adev);
> >>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> >>> b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> >>> index deae12dc777d..329a4c89f1e6 100644
> >>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> >>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> >>> @@ -222,7 +222,7 @@ void
> >>> kfd_smi_event_update_thermal_throttling(struct kfd_dev *dev,
> >>>
> >>>    	len = snprintf(fifo_in, sizeof(fifo_in), "%x %llx:%llx\n",
> >>>    		       KFD_SMI_EVENT_THERMAL_THROTTLE, throttle_bitmask,
> >>> -		       atomic64_read(&dev->adev->smu.throttle_int_counter));
> >>> +		       amdgpu_dpm_get_thermal_throttling_counter(dev-
> >>> adev));
> >>>
> >>>    	add_event_to_kfifo(dev, KFD_SMI_EVENT_THERMAL_THROTTLE,
> >> 	fifo_in, len);
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> index 5c0867ebcfce..2e295facd086 100644
> >>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> @@ -26,6 +26,10 @@
> >>>
> >>>    extern const struct amdgpu_ip_block_version pp_smu_ip_block;
> >>>
> >>> +enum smu_event_type {
> >>> +	SMU_EVENT_RESET_COMPLETE = 0,
> >>> +};
> >>> +
> >>>    struct amd_vce_state {
> >>>    	/* vce clocks */
> >>>    	u32 evclk;
> >>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> index 08362d506534..9b332c8a0079 100644
> >>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> @@ -1614,3 +1614,98 @@ int amdgpu_pm_load_smu_firmware(struct
> >>> amdgpu_device *adev, uint32_t *smu_versio
> >>>
> >>>    	return 0;
> >>>    }
> >>> +
> >>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool
> >> enable)
> >>> +{
> >>> +	return smu_set_light_sbr(&adev->smu, enable); }
> >>> +
> >>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device
> >> *adev,
> >>> +uint32_t size) {
> >>> +	int ret = 0;
> >>> +
> >>> +	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num)
> >>> +		ret = adev->smu.ppt_funcs-
> >>> send_hbm_bad_pages_num(&adev->smu,
> >>> +size);
> >>> +
> >>> +	return ret;
> >>> +}
> >>> +
> >>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> >>> +				  enum pp_clock_type type,
> >>> +				  uint32_t *min,
> >>> +				  uint32_t *max)
> >>> +{
> >>> +	if (!is_support_sw_smu(adev))
> >>> +		return -EOPNOTSUPP;
> >>> +
> >>> +	switch (type) {
> >>> +	case PP_SCLK:
> >>> +		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK,
> >> min, max);
> >>> +	default:
> >>> +		return -EINVAL;
> >>> +	}
> >>> +}
> >>> +
> >>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >>> +				   enum pp_clock_type type,
> >>> +				   uint32_t min,
> >>> +				   uint32_t max)
> >>> +{
> >>> +	if (!is_support_sw_smu(adev))
> >>> +		return -EOPNOTSUPP;
> >>> +
> >>> +	switch (type) {
> >>> +	case PP_SCLK:
> >>> +		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK,
> >> min, max);
> >>> +	default:
> >>> +		return -EINVAL;
> >>> +	}
> >>> +}
> >>> +
> >>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
> >>> +			      enum smu_event_type event,
> >>> +			      uint64_t event_arg)
> >>> +{
> >>> +	if (!is_support_sw_smu(adev))
> >>> +		return -EOPNOTSUPP;
> >>> +
> >>> +	return smu_wait_for_event(&adev->smu, event, event_arg); }
> >>> +
> >>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev,
> >> uint32_t
> >>> +*value) {
> >>> +	if (!is_support_sw_smu(adev))
> >>> +		return -EOPNOTSUPP;
> >>> +
> >>> +	return smu_get_status_gfxoff(&adev->smu, value); }
> >>> +
> >>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct
> >>> +amdgpu_device *adev) {
> >>> +	return atomic64_read(&adev->smu.throttle_int_counter);
> >>> +}
> >>> +
> >>> +/* amdgpu_dpm_gfx_state_change - Handle gfx power state change
> set
> >>> + * @adev: amdgpu_device pointer
> >>> + * @state: gfx power state(1 -sGpuChangeState_D0Entry and 2
> >>> +-sGpuChangeState_D3Entry)
> >>> + *
> >>> + */
> >>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> >>> +				 enum gfx_change_state state)
> >>> +{
> >>> +	mutex_lock(&adev->pm.mutex);
> >>> +	if (adev->powerplay.pp_funcs &&
> >>> +	    adev->powerplay.pp_funcs->gfx_state_change_set)
> >>> +		((adev)->powerplay.pp_funcs->gfx_state_change_set(
> >>> +			(adev)->powerplay.pp_handle, state));
> >>> +	mutex_unlock(&adev->pm.mutex);
> >>> +}
> >>> +
> >>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> >>> +			    void *umc_ecc)
> >>> +{
> >>> +	if (!is_support_sw_smu(adev))
> >>> +		return -EOPNOTSUPP;
> >>> +
> >>
> >> In general, I don't think we need to keep this check everywhere to make
> >> amdgpu_dpm_* backwards compatible. The usage is also inconsistent.
> >> For example: amdgpu_dpm_get_thermal_throttling_counter doesn't have any
> >> is_support_sw_smu check whereas amdgpu_dpm_get_ecc_info() has it.
> >> There is no reason to keep adding is_support_sw_smu() checks for every
> >> new public API. For sure, they are not going to work with the powerplay
> >> subsystem.
> >>
> >> I would rather prefer to leave old things alone and create amdgpu_smu_* for
> >> anything which is supported only in the smu subsystem. It's easier to
> >> read from a code perspective also - separating the ones which are
> >> supported by the smu component from those not supported in the older
> >> powerplay components.
> >>
> >> Only for the common ones that are supported in both powerplay and smu,
> >> keep amdgpu_dpm_*; for the others the preference would be amdgpu_smu_*.
> > [Quan, Evan] I get your point. However, that will bring back the problem
> > we are trying to avoid.
> > That is, the callers need to know whether the amdgpu_smu_* APIs can be used.
> > They need to know whether the swsmu framework is supported on a given
> > ASIC.
> 
> swsmu has been around for some time. I'm suggesting moving away from
> amdgpu_dpm_*, which is the legacy interface. There is no need to add new
> dpm_* APIs which are supported only in the swsmu component. We only need
> to maintain the existing usage of dpm_*.
> 
> For the newer ones, let us move to component-based APIs like
> amdgpu_smu_*. All the clients of swsmu are part of this component-based
> architecture and they need to be aware of the services of swsmu. It is
> similar to what is followed in other components like amdgpu_gmc_*,
> amdgpu_vcn* etc.
[Quan, Evan] I'm open to the idea of moving all swsmu-based APIs to amdgpu_smu_*.
Maybe we can list that as a TODO among the further optimizations.
What worries me is the case below:
1. For gfx v9, Arcturus and Aldebaran support swsmu while the other ASICs do not.
2. Then for any swsmu API called from gfx_v9_0.c, an "if (adev->asic_type == CHIP_ARCTURUS)" or similar guard would be needed. That requires the caller to know the details above (see the sketch below).
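
To illustrate point 2, a minimal sketch (hypothetical; amdgpu_smu_set_light_sbr is an assumed name, not an API from this series). With component-based entry points the ASIC knowledge leaks into the client, while the unified wrapper keeps that check inside power:

	/* Hypothetical caller in gfx_v9_0.c if only amdgpu_smu_* existed:
	 * the swsmu-support knowledge leaks into the client. */
	if (adev->asic_type == CHIP_ARCTURUS || adev->asic_type == CHIP_ALDEBARAN)
		ret = amdgpu_smu_set_light_sbr(adev, true);

	/* With the unified entry point, the check stays in one place: */
	int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
	{
		if (!is_support_sw_smu(adev))
			return -EOPNOTSUPP;

		return smu_set_light_sbr(&adev->smu, enable);
	}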

For this patch, can we just stick to the original amdgpu_dpm_* way? I really do not want to involve too many changes.

BR
Evan
> 
> Thanks,
> Lijo
> 
> > And yes, there are some inconsistent cases in the current power code.
> > Maybe we can create new patch(es) to fix them?
> > For this patch series, I would like to avoid any real code logic change.
> >
> > BR
> > Evan
> >>
> >> Thanks,
> >> Lijo
> >>
> >>> +	return smu_get_ecc_info(&adev->smu, umc_ecc);
> >>> +}
> >>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> index 16e3f72d31b9..7289d379a9fb 100644
> >>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> @@ -23,6 +23,12 @@
> >>>    #ifndef __AMDGPU_DPM_H__
> >>>    #define __AMDGPU_DPM_H__
> >>>
> >>> +/* Argument for PPSMC_MSG_GpuChangeState */
> >>> +enum gfx_change_state {
> >>> +	sGpuChangeState_D0Entry = 1,
> >>> +	sGpuChangeState_D3Entry,
> >>> +};
> >>> +
> >>>    enum amdgpu_int_thermal_type {
> >>>    	THERMAL_TYPE_NONE,
> >>>    	THERMAL_TYPE_EXTERNAL,
> >>> @@ -574,5 +580,22 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
> >>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
> >>>    void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >>>    int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
> >>> -
> >>> +int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
> >>> +int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
> >>> +int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> >>> +				       enum pp_clock_type type,
> >>> +				       uint32_t *min,
> >>> +				       uint32_t *max);
> >>> +int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >>> +				        enum pp_clock_type type,
> >>> +				        uint32_t min,
> >>> +				        uint32_t max);
> >>> +int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> >>> +		       uint64_t event_arg);
> >>> +int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
> >>> +uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev);
> >>> +void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> >>> +				 enum gfx_change_state state);
> >>> +int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> >>> +			    void *umc_ecc);
> >>>    #endif
> >>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> index f738f7dc20c9..29791bb21fba 100644
> >>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> @@ -241,11 +241,6 @@ struct smu_user_dpm_profile {
> >>>    	uint32_t clk_dependency;
> >>>    };
> >>>
> >>> -enum smu_event_type {
> >>> -
> >>> -	SMU_EVENT_RESET_COMPLETE = 0,
> >>> -};
> >>> -
> >>>    #define SMU_TABLE_INIT(tables, table_id, s, a, d)	\
> >>>    	do {						\
> >>>    		tables[table_id].size = s;		\
> >>> @@ -1412,11 +1407,11 @@ int smu_set_ac_dc(struct smu_context
> *smu);
> >>>
> >>>    int smu_allow_xgmi_power_down(struct smu_context *smu, bool en);
> >>>
> >>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value);
> >>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value);
> >>>
> >>>    int smu_set_light_sbr(struct smu_context *smu, bool enable);
> >>>
> >>> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> >>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
> >>>    		       uint64_t event_arg);
> >>>    int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
> >>>    int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> index 5839918cb574..ef7d0e377965 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> @@ -100,17 +100,14 @@ static int smu_sys_set_pp_feature_mask(void
> >> *handle,
> >>>    	return ret;
> >>>    }
> >>>
> >>> -int smu_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
> >>> +int smu_get_status_gfxoff(struct smu_context *smu, uint32_t *value)
> >>>    {
> >>> -	int ret = 0;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	if (!smu->ppt_funcs->get_gfx_off_status)
> >>> +		return -EINVAL;
> >>>
> >>> -	if (is_support_sw_smu(adev) && smu->ppt_funcs->get_gfx_off_status)
> >>> -		*value = smu_get_gfx_off_status(smu);
> >>> -	else
> >>> -		ret = -EINVAL;
> >>> +	*value = smu_get_gfx_off_status(smu);
> >>>
> >>> -	return ret;
> >>> +	return 0;
> >>>    }
> >>>
> >>>    int smu_set_soft_freq_range(struct smu_context *smu,
> >>> @@ -3167,11 +3164,10 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
> >>>    	.get_smu_prv_buf_details = smu_get_prv_buffer_details,
> >>>    };
> >>>
> >>> -int smu_wait_for_event(struct amdgpu_device *adev, enum smu_event_type event,
> >>> +int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
> >>>    		       uint64_t event_arg)
> >>>    {
> >>>    	int ret = -EINVAL;
> >>> -	struct smu_context *smu = &adev->smu;
> >>>
> >>>    	if (smu->ppt_funcs->wait_for_event) {
> >>>    		mutex_lock(&smu->mutex);
> >>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-12-01  4:19       ` Lazar, Lijo
@ 2021-12-01  7:17         ` Quan, Evan
  2021-12-01  7:36           ` Lazar, Lijo
  0 siblings, 1 reply; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  7:17 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Wednesday, December 1, 2021 12:19 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
> APIs used only by legacy ASICs(si/kv)
> 
> 
> 
> On 12/1/2021 8:43 AM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -----Original Message-----
> >> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> >> Sent: Tuesday, November 30, 2021 9:21 PM
> >> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig,
> Christian
> >> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> >> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for
> those
> >> APIs used only by legacy ASICs(si/kv)
> >>
> >>
> >>
> >> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>> Those APIs are used only by legacy ASICs (si/kv). They cannot be
> >>> shared by other ASICs. So, we create a new holder for them.
> >>>
> >>> Signed-off-by: Evan Quan <evan.quan@amd.com>
> >>> Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
> >>> --
> >>> v1->v2:
> >>>     - move other APIs used by si/kv in amdgpu_atombios.c to the new
> >>>       holder also (Alex)
> >>> ---
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
> >>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
> >>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
> >>>    drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
> >>>    drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
> >>>    drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
> >>>    drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453
> >> +++++++++++++++++
> >>>    drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
> >>>    drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
> >>>    10 files changed, 1534 insertions(+), 1470 deletions(-)
> >>>    create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>>    create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>> index 12a6b1c99c93..f2e447212e62 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>> @@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >>>    	return 0;
> >>>    }
> >>>
> >>> -int amdgpu_atombios_get_memory_pll_dividers(struct
> amdgpu_device
> >> *adev,
> >>> -					    u32 clock,
> >>> -					    bool strobe_mode,
> >>> -					    struct atom_mpll_param
> >> *mpll_param)
> >>> -{
> >>> -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> >>> -	int index = GetIndexIntoMasterTable(COMMAND,
> >> ComputeMemoryClockParam);
> >>> -	u8 frev, crev;
> >>> -
> >>> -	memset(&args, 0, sizeof(args));
> >>> -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> >>> -
> >>> -	if (!amdgpu_atom_parse_cmd_header(adev-
> >>> mode_info.atom_context, index, &frev, &crev))
> >>> -		return -EINVAL;
> >>> -
> >>> -	switch (frev) {
> >>> -	case 2:
> >>> -		switch (crev) {
> >>> -		case 1:
> >>> -			/* SI */
> >>> -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> >>> -			args.ucInputFlag = 0;
> >>> -			if (strobe_mode)
> >>> -				args.ucInputFlag |=
> >> MPLL_INPUT_FLAG_STROBE_MODE_EN;
> >>> -
> >>> -			amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> -
> >>> -			mpll_param->clkfrac =
> >> le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> >>> -			mpll_param->clkf =
> >> le16_to_cpu(args.ulFbDiv.usFbDiv);
> >>> -			mpll_param->post_div = args.ucPostDiv;
> >>> -			mpll_param->dll_speed = args.ucDllSpeed;
> >>> -			mpll_param->bwcntl = args.ucBWCntl;
> >>> -			mpll_param->vco_mode =
> >>> -				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_VCO_MODE_MASK);
> >>> -			mpll_param->yclk_sel =
> >>> -				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> >>> -			mpll_param->qdr =
> >>> -				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> >>> -			mpll_param->half_rate =
> >>> -				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> >>> -			break;
> >>> -		default:
> >>> -			return -EINVAL;
> >>> -		}
> >>> -		break;
> >>> -	default:
> >>> -		return -EINVAL;
> >>> -	}
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -void amdgpu_atombios_set_engine_dram_timings(struct
> amdgpu_device
> >> *adev,
> >>> -					     u32 eng_clock, u32 mem_clock)
> >>> -{
> >>> -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> >>> -	int index = GetIndexIntoMasterTable(COMMAND,
> >> DynamicMemorySettings);
> >>> -	u32 tmp;
> >>> -
> >>> -	memset(&args, 0, sizeof(args));
> >>> -
> >>> -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> >>> -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> >>> -
> >>> -	args.ulTargetEngineClock = cpu_to_le32(tmp);
> >>> -	if (mem_clock)
> >>> -		args.sReserved.ulClock = cpu_to_le32(mem_clock &
> >> SET_CLOCK_FREQ_MASK);
> >>> -
> >>> -	amdgpu_atom_execute_table(adev->mode_info.atom_context,
> >> index, (uint32_t *)&args);
> >>> -}
> >>> -
> >>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device
> >> *adev,
> >>> -					  u16 *vddc, u16 *vddci, u16 *mvdd)
> >>> -{
> >>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> >>> -	u8 frev, crev;
> >>> -	u16 data_offset;
> >>> -	union firmware_info *firmware_info;
> >>> -
> >>> -	*vddc = 0;
> >>> -	*vddci = 0;
> >>> -	*mvdd = 0;
> >>> -
> >>> -	if (amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> -				   &frev, &crev, &data_offset)) {
> >>> -		firmware_info =
> >>> -			(union firmware_info *)(mode_info->atom_context-
> >>> bios +
> >>> -						data_offset);
> >>> -		*vddc = le16_to_cpu(firmware_info-
> >>> info_14.usBootUpVDDCVoltage);
> >>> -		if ((frev == 2) && (crev >= 2)) {
> >>> -			*vddci = le16_to_cpu(firmware_info-
> >>> info_22.usBootUpVDDCIVoltage);
> >>> -			*mvdd = le16_to_cpu(firmware_info-
> >>> info_22.usBootUpMVDDCVoltage);
> >>> -		}
> >>> -	}
> >>> -}
> >>> -
> >>> -union set_voltage {
> >>> -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> >>> -	struct _SET_VOLTAGE_PARAMETERS v1;
> >>> -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> >>> -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> >>> -};
> >>> -
> >>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8
> >> voltage_type,
> >>> -			     u16 voltage_id, u16 *voltage)
> >>> -{
> >>> -	union set_voltage args;
> >>> -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> >>> -	u8 frev, crev;
> >>> -
> >>> -	if (!amdgpu_atom_parse_cmd_header(adev-
> >>> mode_info.atom_context, index, &frev, &crev))
> >>> -		return -EINVAL;
> >>> -
> >>> -	switch (crev) {
> >>> -	case 1:
> >>> -		return -EINVAL;
> >>> -	case 2:
> >>> -		args.v2.ucVoltageType =
> >> SET_VOLTAGE_GET_MAX_VOLTAGE;
> >>> -		args.v2.ucVoltageMode = 0;
> >>> -		args.v2.usVoltageLevel = 0;
> >>> -
> >>> -		amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> -
> >>> -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> >>> -		break;
> >>> -	case 3:
> >>> -		args.v3.ucVoltageType = voltage_type;
> >>> -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> >>> -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> >>> -
> >>> -		amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> -
> >>> -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> >>> -		break;
> >>> -	default:
> >>> -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>> -		return -EINVAL;
> >>> -	}
> >>> -
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -int
> amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct
> >> amdgpu_device *adev,
> >>> -						      u16 *voltage,
> >>> -						      u16 leakage_idx)
> >>> -{
> >>> -	return amdgpu_atombios_get_max_vddc(adev,
> >> VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> >>> -}
> >>> -
> >>> -union voltage_object_info {
> >>> -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> >>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> >>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> >>> -};
> >>> -
> >>> -union voltage_object {
> >>> -	struct _ATOM_VOLTAGE_OBJECT v1;
> >>> -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> >>> -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> >>> -};
> >>> -
> >>> -
> >>> -static ATOM_VOLTAGE_OBJECT_V3
> >>
> *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_I
> >> NFO_V3_1 *v3,
> >>> -									u8
> >> voltage_type, u8 voltage_mode)
> >>> -{
> >>> -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> >>> -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1,
> >> asVoltageObj[0]);
> >>> -	u8 *start = (u8 *)v3;
> >>> -
> >>> -	while (offset < size) {
> >>> -		ATOM_VOLTAGE_OBJECT_V3 *vo =
> >> (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> >>> -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType ==
> >> voltage_type) &&
> >>> -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode ==
> >> voltage_mode))
> >>> -			return vo;
> >>> -		offset += le16_to_cpu(vo-
> >>> asGpioVoltageObj.sHeader.usSize);
> >>> -	}
> >>> -	return NULL;
> >>> -}
> >>> -
> >>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>> -			      u8 voltage_type,
> >>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> >>> -{
> >>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> -	u8 frev, crev;
> >>> -	u16 data_offset, size;
> >>> -	union voltage_object_info *voltage_info;
> >>> -	union voltage_object *voltage_object = NULL;
> >>> -
> >>> -	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> -				   &frev, &crev, &data_offset)) {
> >>> -		voltage_info = (union voltage_object_info *)
> >>> -			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> -
> >>> -		switch (frev) {
> >>> -		case 3:
> >>> -			switch (crev) {
> >>> -			case 1:
> >>> -				voltage_object = (union voltage_object *)
> >>> -
> >> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> -
> >> voltage_type,
> >>> -
> >> VOLTAGE_OBJ_SVID2);
> >>> -				if (voltage_object) {
> >>> -					*svd_gpio_id = voltage_object-
> >>> v3.asSVID2Obj.ucSVDGpioId;
> >>> -					*svc_gpio_id = voltage_object-
> >>> v3.asSVID2Obj.ucSVCGpioId;
> >>> -				} else {
> >>> -					return -EINVAL;
> >>> -				}
> >>> -				break;
> >>> -			default:
> >>> -				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> -				return -EINVAL;
> >>> -			}
> >>> -			break;
> >>> -		default:
> >>> -			DRM_ERROR("unknown voltage object table\n");
> >>> -			return -EINVAL;
> >>> -		}
> >>> -
> >>> -	}
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -bool
> >>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>> -				u8 voltage_type, u8 voltage_mode)
> >>> -{
> >>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> -	u8 frev, crev;
> >>> -	u16 data_offset, size;
> >>> -	union voltage_object_info *voltage_info;
> >>> -
> >>> -	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> -				   &frev, &crev, &data_offset)) {
> >>> -		voltage_info = (union voltage_object_info *)
> >>> -			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> -
> >>> -		switch (frev) {
> >>> -		case 3:
> >>> -			switch (crev) {
> >>> -			case 1:
> >>> -				if
> >> (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> -
> >> voltage_type, voltage_mode))
> >>> -					return true;
> >>> -				break;
> >>> -			default:
> >>> -				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> -				return false;
> >>> -			}
> >>> -			break;
> >>> -		default:
> >>> -			DRM_ERROR("unknown voltage object table\n");
> >>> -			return false;
> >>> -		}
> >>> -
> >>> -	}
> >>> -	return false;
> >>> -}
> >>> -
> >>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>> -				      u8 voltage_type, u8 voltage_mode,
> >>> -				      struct atom_voltage_table *voltage_table)
> >>> -{
> >>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> -	u8 frev, crev;
> >>> -	u16 data_offset, size;
> >>> -	int i;
> >>> -	union voltage_object_info *voltage_info;
> >>> -	union voltage_object *voltage_object = NULL;
> >>> -
> >>> -	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> -				   &frev, &crev, &data_offset)) {
> >>> -		voltage_info = (union voltage_object_info *)
> >>> -			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> -
> >>> -		switch (frev) {
> >>> -		case 3:
> >>> -			switch (crev) {
> >>> -			case 1:
> >>> -				voltage_object = (union voltage_object *)
> >>> -
> >> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> -
> >> voltage_type, voltage_mode);
> >>> -				if (voltage_object) {
> >>> -					ATOM_GPIO_VOLTAGE_OBJECT_V3
> >> *gpio =
> >>> -						&voltage_object-
> >>> v3.asGpioVoltageObj;
> >>> -					VOLTAGE_LUT_ENTRY_V2 *lut;
> >>> -					if (gpio->ucGpioEntryNum >
> >> MAX_VOLTAGE_ENTRIES)
> >>> -						return -EINVAL;
> >>> -					lut = &gpio->asVolGpioLut[0];
> >>> -					for (i = 0; i < gpio->ucGpioEntryNum;
> >> i++) {
> >>> -						voltage_table-
> >>> entries[i].value =
> >>> -							le16_to_cpu(lut-
> >>> usVoltageValue);
> >>> -						voltage_table-
> >>> entries[i].smio_low =
> >>> -							le32_to_cpu(lut-
> >>> ulVoltageId);
> >>> -						lut =
> >> (VOLTAGE_LUT_ENTRY_V2 *)
> >>> -							((u8 *)lut +
> >> sizeof(VOLTAGE_LUT_ENTRY_V2));
> >>> -					}
> >>> -					voltage_table->mask_low =
> >> le32_to_cpu(gpio->ulGpioMaskVal);
> >>> -					voltage_table->count = gpio-
> >>> ucGpioEntryNum;
> >>> -					voltage_table->phase_delay = gpio-
> >>> ucPhaseDelay;
> >>> -					return 0;
> >>> -				}
> >>> -				break;
> >>> -			default:
> >>> -				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> -				return -EINVAL;
> >>> -			}
> >>> -			break;
> >>> -		default:
> >>> -			DRM_ERROR("unknown voltage object table\n");
> >>> -			return -EINVAL;
> >>> -		}
> >>> -	}
> >>> -	return -EINVAL;
> >>> -}
> >>> -
> >>> -union vram_info {
> >>> -	struct _ATOM_VRAM_INFO_V3 v1_3;
> >>> -	struct _ATOM_VRAM_INFO_V4 v1_4;
> >>> -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> >>> -};
> >>> -
> >>> -#define MEM_ID_MASK           0xff000000
> >>> -#define MEM_ID_SHIFT          24
> >>> -#define CLOCK_RANGE_MASK      0x00ffffff
> >>> -#define CLOCK_RANGE_SHIFT     0
> >>> -#define LOW_NIBBLE_MASK       0xf
> >>> -#define DATA_EQU_PREV         0
> >>> -#define DATA_FROM_TABLE       4
> >>> -
> >>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>> -				      u8 module_index,
> >>> -				      struct atom_mc_reg_table *reg_table)
> >>> -{
> >>> -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> >>> -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> >>> -	u32 i = 0, j;
> >>> -	u16 data_offset, size;
> >>> -	union vram_info *vram_info;
> >>> -
> >>> -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> >>> -
> >>> -	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> -				   &frev, &crev, &data_offset)) {
> >>> -		vram_info = (union vram_info *)
> >>> -			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> -		switch (frev) {
> >>> -		case 1:
> >>> -			DRM_ERROR("old table version %d, %d\n", frev,
> >> crev);
> >>> -			return -EINVAL;
> >>> -		case 2:
> >>> -			switch (crev) {
> >>> -			case 1:
> >>> -				if (module_index < vram_info-
> >>> v2_1.ucNumOfVRAMModule) {
> >>> -					ATOM_INIT_REG_BLOCK *reg_block
> >> =
> >>> -						(ATOM_INIT_REG_BLOCK *)
> >>> -						((u8 *)vram_info +
> >> le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> >>> -
> >> 	ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> >>> -
> >> 	(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>> -						((u8 *)reg_block + (2 *
> >> sizeof(u16)) +
> >>> -						 le16_to_cpu(reg_block-
> >>> usRegIndexTblSize));
> >>> -					ATOM_INIT_REG_INDEX_FORMAT
> >> *format = &reg_block->asRegIndexBuf[0];
> >>> -					num_entries =
> >> (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> >>> -
> >> sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> >>> -					if (num_entries >
> >> VBIOS_MC_REGISTER_ARRAY_SIZE)
> >>> -						return -EINVAL;
> >>> -					while (i < num_entries) {
> >>> -						if (format-
> >>> ucPreRegDataLength & ACCESS_PLACEHOLDER)
> >>> -							break;
> >>> -						reg_table-
> >>> mc_reg_address[i].s1 =
> >>> -
> >> 	(u16)(le16_to_cpu(format->usRegIndex));
> >>> -						reg_table-
> >>> mc_reg_address[i].pre_reg_data =
> >>> -							(u8)(format-
> >>> ucPreRegDataLength);
> >>> -						i++;
> >>> -						format =
> >> (ATOM_INIT_REG_INDEX_FORMAT *)
> >>> -							((u8 *)format +
> >> sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> >>> -					}
> >>> -					reg_table->last = i;
> >>> -					while ((le32_to_cpu(*(u32
> >> *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> >>> -					       (num_ranges <
> >> VBIOS_MAX_AC_TIMING_ENTRIES)) {
> >>> -						t_mem_id =
> >> (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> >>> -								>>
> >> MEM_ID_SHIFT);
> >>> -						if (module_index ==
> >> t_mem_id) {
> >>> -							reg_table-
> >>> mc_reg_table_entry[num_ranges].mclk_max =
> >>> -
> >> 	(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> >>> -								      >>
> >> CLOCK_RANGE_SHIFT);
> >>> -							for (i = 0, j = 1; i <
> >> reg_table->last; i++) {
> >>> -								if ((reg_table-
> >>> mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> >> DATA_FROM_TABLE) {
> >>> -
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>> -
> >> 	(u32)le32_to_cpu(*((u32 *)reg_data + j));
> >>> -									j++;
> >>> -								} else if
> >> ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> >> DATA_EQU_PREV) {
> >>> -
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>> -
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> >>> -								}
> >>> -							}
> >>> -							num_ranges++;
> >>> -						}
> >>> -						reg_data =
> >> (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>> -							((u8 *)reg_data +
> >> le16_to_cpu(reg_block->usRegDataBlkSize));
> >>> -					}
> >>> -					if (le32_to_cpu(*(u32 *)reg_data) !=
> >> END_OF_REG_DATA_BLOCK)
> >>> -						return -EINVAL;
> >>> -					reg_table->num_entries =
> >> num_ranges;
> >>> -				} else
> >>> -					return -EINVAL;
> >>> -				break;
> >>> -			default:
> >>> -				DRM_ERROR("Unknown table
> >> version %d, %d\n", frev, crev);
> >>> -				return -EINVAL;
> >>> -			}
> >>> -			break;
> >>> -		default:
> >>> -			DRM_ERROR("Unknown table version %d, %d\n",
> >> frev, crev);
> >>> -			return -EINVAL;
> >>> -		}
> >>> -		return 0;
> >>> -	}
> >>> -	return -EINVAL;
> >>> -}
> >>> -
> >>>    bool amdgpu_atombios_has_gpu_virtualization_table(struct
> >> amdgpu_device *adev)
> >>>    {
> >>>    	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>> index 27e74b1fc260..cb5649298dcb 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>> @@ -160,26 +160,6 @@ int
> amdgpu_atombios_get_clock_dividers(struct
> >> amdgpu_device *adev,
> >>>    				       bool strobe_mode,
> >>>    				       struct atom_clock_dividers *dividers);
> >>>
> >>> -int amdgpu_atombios_get_memory_pll_dividers(struct
> amdgpu_device
> >> *adev,
> >>> -					    u32 clock,
> >>> -					    bool strobe_mode,
> >>> -					    struct atom_mpll_param
> >> *mpll_param);
> >>> -
> >>> -void amdgpu_atombios_set_engine_dram_timings(struct
> amdgpu_device
> >> *adev,
> >>> -					     u32 eng_clock, u32 mem_clock);
> >>> -
> >>> -bool
> >>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>> -				u8 voltage_type, u8 voltage_mode);
> >>> -
> >>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>> -				      u8 voltage_type, u8 voltage_mode,
> >>> -				      struct atom_voltage_table
> >> *voltage_table);
> >>> -
> >>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>> -				      u8 module_index,
> >>> -				      struct atom_mc_reg_table *reg_table);
> >>> -
> >>>    bool amdgpu_atombios_has_gpu_virtualization_table(struct
> >> amdgpu_device *adev);
> >>>
> >>>    void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device
> *adev,
> >> bool lock);
> >>> @@ -190,21 +170,11 @@ void
> >> amdgpu_atombios_scratch_regs_set_backlight_level(struct
> amdgpu_device
> >> *adev
> >>>    bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device
> >> *adev);
> >>>
> >>>    void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes,
> bool
> >> to_le);
> >>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8
> >> voltage_type,
> >>> -			     u16 voltage_id, u16 *voltage);
> >>> -int
> amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct
> >> amdgpu_device *adev,
> >>> -						      u16 *voltage,
> >>> -						      u16 leakage_idx);
> >>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device
> >> *adev,
> >>> -					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >>>    int amdgpu_atombios_get_clock_dividers(struct amdgpu_device
> *adev,
> >>>    				       u8 clock_type,
> >>>    				       u32 clock,
> >>>    				       bool strobe_mode,
> >>>    				       struct atom_clock_dividers *dividers);
> >>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>> -			      u8 voltage_type,
> >>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> >>>
> >>>    int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
> >>>    				   uint32_t table,
> >>
> >>
> >> Whether used in legacy or new logic, atombios table parsing/execution
> >> should be kept as separate logic. These shouldn't be moved along with
> >> dpm.
> > [Quan, Evan] Are you suggesting another placeholder for those atombios
> > APIs? Like legacy_atombios.c?
> 
> What I meant is there is no need to move them; keep them in the same file.
> We also have atomfirmware; splitting this out and adding another
> legacy_atombios is not required.
[Quan, Evan] Hmm, that seems contrary to Alex's suggestion.
Although I'm fine with either, I kind of prefer Alex's approach.
That is, if they are destined to be dropped (together with SI/KV support), we should get them separated now, roughly like the sketch below.
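
To make that concrete, a rough sketch of where the SI/KV-only helpers would land (file name from the diffstat above; the declarations shown are illustrative, not the exact patch contents):

	/* drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h */
	#ifndef __LEGACY_DPM_H__
	#define __LEGACY_DPM_H__

	/* atombios helpers used only by si_dpm.c/kv_dpm.c, moved out of
	 * amdgpu_atombios.h so they can be dropped together with SI/KV
	 */
	int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
						    u32 clock, bool strobe_mode,
						    struct atom_mpll_param *mpll_param);
	int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
					  u8 voltage_type,
					  u8 *svd_gpio_id, u8 *svc_gpio_id);

	#endif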

BR
Evan
> 
> >>
> >>
> >>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >> b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> index 2e295facd086..cdf724dcf832 100644
> >>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> @@ -404,6 +404,7 @@ struct amd_pm_funcs {
> >>>    	int (*get_dpm_clock_table)(void *handle,
> >>>    				   struct dpm_clocks *clock_table);
> >>>    	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t
> >> *size);
> >>> +	int (*change_power_state)(void *handle);
> >>>    };
> >>>
> >>>    struct metrics_table_header {
> >>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >> b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> index ecaf0081bc31..c6801d10cde6 100644
> >>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> @@ -34,113 +34,9 @@
> >>>
> >>>    #define WIDTH_4K 3840
> >>>
> >>> -#define amdgpu_dpm_pre_set_power_state(adev) \
> >>> -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> >>> -
> >>> -#define amdgpu_dpm_post_set_power_state(adev) \
> >>> -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> >>> -
> >>> -#define amdgpu_dpm_display_configuration_changed(adev) \
> >>> -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> >>> -
> >>> -#define amdgpu_dpm_print_power_state(adev, ps) \
> >>> -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> >>> -
> >>> -#define amdgpu_dpm_vblank_too_short(adev) \
> >>> -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> >>> -
> >>>    #define amdgpu_dpm_enable_bapm(adev, e) \
> >>>    		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> >>>
> >>> -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> >>> -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> >>> -
> >>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> >>> -{
> >>> -	const char *s;
> >>> -
> >>> -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> >>> -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> >>> -	default:
> >>> -		s = "none";
> >>> -		break;
> >>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> >>> -		s = "battery";
> >>> -		break;
> >>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> >>> -		s = "balanced";
> >>> -		break;
> >>> -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> >>> -		s = "performance";
> >>> -		break;
> >>> -	}
> >>> -	printk("\tui class: %s\n", s);
> >>> -	printk("\tinternal class:");
> >>> -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> >>> -	    (class2 == 0))
> >>> -		pr_cont(" none");
> >>> -	else {
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> >>> -			pr_cont(" boot");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>> -			pr_cont(" thermal");
> >>> -		if (class &
> >> ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> >>> -			pr_cont(" limited_pwr");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> >>> -			pr_cont(" rest");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> >>> -			pr_cont(" forced");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>> -			pr_cont(" 3d_perf");
> >>> -		if (class &
> >> ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> >>> -			pr_cont(" ovrdrv");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> >>> -			pr_cont(" uvd");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> >>> -			pr_cont(" 3d_low");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>> -			pr_cont(" acpi");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>> -			pr_cont(" uvd_hd2");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>> -			pr_cont(" uvd_hd");
> >>> -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>> -			pr_cont(" uvd_sd");
> >>> -		if (class2 &
> >> ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> >>> -			pr_cont(" limited_pwr2");
> >>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>> -			pr_cont(" ulv");
> >>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>> -			pr_cont(" uvd_mvc");
> >>> -	}
> >>> -	pr_cont("\n");
> >>> -}
> >>> -
> >>> -void amdgpu_dpm_print_cap_info(u32 caps)
> >>> -{
> >>> -	printk("\tcaps:");
> >>> -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> >>> -		pr_cont(" single_disp");
> >>> -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> >>> -		pr_cont(" video");
> >>> -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> >>> -		pr_cont(" no_dc");
> >>> -	pr_cont("\n");
> >>> -}
> >>> -
> >>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>> -				struct amdgpu_ps *rps)
> >>> -{
> >>> -	printk("\tstatus:");
> >>> -	if (rps == adev->pm.dpm.current_ps)
> >>> -		pr_cont(" c");
> >>> -	if (rps == adev->pm.dpm.requested_ps)
> >>> -		pr_cont(" r");
> >>> -	if (rps == adev->pm.dpm.boot_ps)
> >>> -		pr_cont(" b");
> >>> -	pr_cont("\n");
> >>> -}
> >>> -
> >>>    static void amdgpu_dpm_get_active_displays(struct amdgpu_device
> >> *adev)
> >>>    {
> >>>    	struct drm_device *ddev = adev_to_drm(adev);
> >>> @@ -161,7 +57,6 @@ static void
> amdgpu_dpm_get_active_displays(struct
> >> amdgpu_device *adev)
> >>>    	}
> >>>    }
> >>>
> >>> -
> >>>    u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
> >>>    {
> >>>    	struct drm_device *dev = adev_to_drm(adev);
> >>> @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct
> >> amdgpu_device *adev)
> >>>    	return vrefresh;
> >>>    }
> >>>
> >>> -union power_info {
> >>> -	struct _ATOM_POWERPLAY_INFO info;
> >>> -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> >>> -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> >>> -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> >>> -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> >>> -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> >>> -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> >>> -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> >>> -};
> >>> -
> >>> -union fan_info {
> >>> -	struct _ATOM_PPLIB_FANTABLE fan;
> >>> -	struct _ATOM_PPLIB_FANTABLE2 fan2;
> >>> -	struct _ATOM_PPLIB_FANTABLE3 fan3;
> >>> -};
> >>> -
> >>> -static int amdgpu_parse_clk_voltage_dep_table(struct
> >> amdgpu_clock_voltage_dependency_table *amdgpu_table,
> >>> -
> >> ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> >>> -{
> >>> -	u32 size = atom_table->ucNumEntries *
> >>> -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>> -	int i;
> >>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> >>> -
> >>> -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> >>> -	if (!amdgpu_table->entries)
> >>> -		return -ENOMEM;
> >>> -
> >>> -	entry = &atom_table->entries[0];
> >>> -	for (i = 0; i < atom_table->ucNumEntries; i++) {
> >>> -		amdgpu_table->entries[i].clk = le16_to_cpu(entry-
> >>> usClockLow) |
> >>> -			(entry->ucClockHigh << 16);
> >>> -		amdgpu_table->entries[i].v = le16_to_cpu(entry-
> >>> usVoltage);
> >>> -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record
> >> *)
> >>> -			((u8 *)entry +
> >> sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> >>> -	}
> >>> -	amdgpu_table->count = atom_table->ucNumEntries;
> >>> -
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> >>> -{
> >>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> -	union power_info *power_info;
> >>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> -	u16 data_offset;
> >>> -	u8 frev, crev;
> >>> -
> >>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> -				   &frev, &crev, &data_offset))
> >>> -		return -EINVAL;
> >>> -	power_info = (union power_info *)(mode_info->atom_context-
> >>> bios + data_offset);
> >>> -
> >>> -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info-
> >>> pplib.ulPlatformCaps);
> >>> -	adev->pm.dpm.backbias_response_time =
> >> le16_to_cpu(power_info->pplib.usBackbiasTime);
> >>> -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info-
> >>> pplib.usVoltageTime);
> >>> -
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> >>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> >>> -
> >>> -int amdgpu_parse_extended_power_table(struct amdgpu_device
> *adev)
> >>> -{
> >>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> -	union power_info *power_info;
> >>> -	union fan_info *fan_info;
> >>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> >>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> -	u16 data_offset;
> >>> -	u8 frev, crev;
> >>> -	int ret, i;
> >>> -
> >>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> -				   &frev, &crev, &data_offset))
> >>> -		return -EINVAL;
> >>> -	power_info = (union power_info *)(mode_info->atom_context-
> >>> bios + data_offset);
> >>> -
> >>> -	/* fan table */
> >>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>> -		if (power_info->pplib3.usFanTableOffset) {
> >>> -			fan_info = (union fan_info *)(mode_info-
> >>> atom_context->bios + data_offset +
> >>> -						      le16_to_cpu(power_info-
> >>> pplib3.usFanTableOffset));
> >>> -			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> >>> -			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info-
> >>> fan.usTMin);
> >>> -			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info-
> >>> fan.usTMed);
> >>> -			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info-
> >>> fan.usTHigh);
> >>> -			adev->pm.dpm.fan.pwm_min =
> >> le16_to_cpu(fan_info->fan.usPWMMin);
> >>> -			adev->pm.dpm.fan.pwm_med =
> >> le16_to_cpu(fan_info->fan.usPWMMed);
> >>> -			adev->pm.dpm.fan.pwm_high =
> >> le16_to_cpu(fan_info->fan.usPWMHigh);
> >>> -			if (fan_info->fan.ucFanTableFormat >= 2)
> >>> -				adev->pm.dpm.fan.t_max =
> >> le16_to_cpu(fan_info->fan2.usTMax);
> >>> -			else
> >>> -				adev->pm.dpm.fan.t_max = 10900;
> >>> -			adev->pm.dpm.fan.cycle_delay = 100000;
> >>> -			if (fan_info->fan.ucFanTableFormat >= 3) {
> >>> -				adev->pm.dpm.fan.control_mode =
> >> fan_info->fan3.ucFanControlMode;
> >>> -				adev->pm.dpm.fan.default_max_fan_pwm
> >> =
> >>> -					le16_to_cpu(fan_info-
> >>> fan3.usFanPWMMax);
> >>> -				adev-
> >>> pm.dpm.fan.default_fan_output_sensitivity = 4836;
> >>> -				adev->pm.dpm.fan.fan_output_sensitivity =
> >>> -					le16_to_cpu(fan_info-
> >>> fan3.usFanOutputSensitivity);
> >>> -			}
> >>> -			adev->pm.dpm.fan.ucode_fan_control = true;
> >>> -		}
> >>> -	}
> >>> -
> >>> -	/* clock dependancy tables, shedding tables */
> >>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> >>> -		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> >>> -			dep_table =
> >> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usVddcDependencyOnSCLKOffset));
> >>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
> >>> pm.dpm.dyn_state.vddc_dependency_on_sclk,
> >>> -								 dep_table);
> >>> -			if (ret) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return ret;
> >>> -			}
> >>> -		}
> >>> -		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> >>> -			dep_table =
> >> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usVddciDependencyOnMCLKOffset));
> >>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
> >>> pm.dpm.dyn_state.vddci_dependency_on_mclk,
> >>> -								 dep_table);
> >>> -			if (ret) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return ret;
> >>> -			}
> >>> -		}
> >>> -		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> >>> -			dep_table =
> >> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usVddcDependencyOnMCLKOffset));
> >>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
> >>> pm.dpm.dyn_state.vddc_dependency_on_mclk,
> >>> -								 dep_table);
> >>> -			if (ret) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return ret;
> >>> -			}
> >>> -		}
> >>> -		if (power_info->pplib4.usMvddDependencyOnMCLKOffset)
> >> {
> >>> -			dep_table =
> >> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usMvddDependencyOnMCLKOffset));
> >>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
> >>> pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> >>> -								 dep_table);
> >>> -			if (ret) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return ret;
> >>> -			}
> >>> -		}
> >>> -		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> >>> -			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> >>> -				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usMaxClockVoltageOnDCOffset));
> >>> -			if (clk_v->ucNumEntries) {
> >>> -				adev-
> >>> pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> >>> -					le16_to_cpu(clk_v-
> >>> entries[0].usSclkLow) |
> >>> -					(clk_v->entries[0].ucSclkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> >>> -					le16_to_cpu(clk_v-
> >>> entries[0].usMclkLow) |
> >>> -					(clk_v->entries[0].ucMclkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> >>> -					le16_to_cpu(clk_v-
> >>> entries[0].usVddc);
> >>> -				adev-
> >>> pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> >>> -					le16_to_cpu(clk_v-
> >>> entries[0].usVddci);
> >>> -			}
> >>> -		}
> >>> -		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset)
> >> {
> >>> -			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> >>> -				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib4.usVddcPhaseShedLimitsTableOffset));
> >>> -			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> >>> -
> >>> -			adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> >>> -				kcalloc(psl->ucNumEntries,
> >>> -					sizeof(struct
> >> amdgpu_phase_shedding_limits_entry),
> >>> -					GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -
> >>> -			entry = &psl->entries[0];
> >>> -			for (i = 0; i < psl->ucNumEntries; i++) {
> >>> -				adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> >>> -					le16_to_cpu(entry->usSclkLow) |
> >> (entry->ucSclkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> >>> -					le16_to_cpu(entry->usMclkLow) |
> >> (entry->ucMclkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> >>> -					le16_to_cpu(entry->usVoltage);
> >>> -				entry =
> >> (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> >>> -			}
> >>> -			adev-
> >>> pm.dpm.dyn_state.phase_shedding_limits_table.count =
> >>> -				psl->ucNumEntries;
> >>> -		}
> >>> -	}
> >>> -
> >>> -	/* cac data */
> >>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> >>> -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info-
> >>> pplib5.ulTDPLimit);
> >>> -		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info-
> >>> pplib5.ulNearTDPLimit);
> >>> -		adev->pm.dpm.near_tdp_limit_adjusted = adev-
> >>> pm.dpm.near_tdp_limit;
> >>> -		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info-
> >>> pplib5.usTDPODLimit);
> >>> -		if (adev->pm.dpm.tdp_od_limit)
> >>> -			adev->pm.dpm.power_control = true;
> >>> -		else
> >>> -			adev->pm.dpm.power_control = false;
> >>> -		adev->pm.dpm.tdp_adjustment = 0;
> >>> -		adev->pm.dpm.sq_ramping_threshold =
> >> le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> >>> -		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info-
> >>> pplib5.ulCACLeakage);
> >>> -		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info-
> >>> pplib5.usLoadLineSlope);
> >>> -		if (power_info->pplib5.usCACLeakageTableOffset) {
> >>> -			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> >>> -				(ATOM_PPLIB_CAC_Leakage_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(power_info-
> >>> pplib5.usCACLeakageTableOffset));
> >>> -			ATOM_PPLIB_CAC_Leakage_Record *entry;
> >>> -			u32 size = cac_table->ucNumEntries * sizeof(struct
> >> amdgpu_cac_leakage_table);
> >>> -			adev->pm.dpm.dyn_state.cac_leakage_table.entries
> >> = kzalloc(size, GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			entry = &cac_table->entries[0];
> >>> -			for (i = 0; i < cac_table->ucNumEntries; i++) {
> >>> -				if (adev->pm.dpm.platform_caps &
> >> ATOM_PP_PLATFORM_CAP_EVV) {
> >>> -					adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> >>> -						le16_to_cpu(entry-
> >>> usVddc1);
> >>> -					adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> >>> -						le16_to_cpu(entry-
> >>> usVddc2);
> >>> -					adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> >>> -						le16_to_cpu(entry-
> >>> usVddc3);
> >>> -				} else {
> >>> -					adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> >>> -						le16_to_cpu(entry->usVddc);
> >>> -					adev-
> >>> pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> >>> -						le32_to_cpu(entry-
> >>> ulLeakageValue);
> >>> -				}
> >>> -				entry = (ATOM_PPLIB_CAC_Leakage_Record
> >> *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> >>> -			}
> >>> -			adev->pm.dpm.dyn_state.cac_leakage_table.count
> >> = cac_table->ucNumEntries;
> >>> -		}
> >>> -	}
> >>> -
> >>> -	/* ext tables */
> >>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>> -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr =
> >> (ATOM_PPLIB_EXTENDEDHEADER *)
> >>> -			(mode_info->atom_context->bios + data_offset +
> >>> -			 le16_to_cpu(power_info-
> >>> pplib3.usExtendendedHeaderOffset));
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> >>> -			ext_hdr->usVCETableOffset) {
> >>> -			VCEClockInfoArray *array = (VCEClockInfoArray *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) +
> >> 1);
> >>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table
> >> *limits =
> >>> -
> >> 	(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) +
> >> 1 +
> >>> -				 1 + array->ucNumEntries *
> >> sizeof(VCEClockInfo));
> >>> -			ATOM_PPLIB_VCE_State_Table *states =
> >>> -				(ATOM_PPLIB_VCE_State_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) +
> >> 1 +
> >>> -				 1 + (array->ucNumEntries * sizeof
> >> (VCEClockInfo)) +
> >>> -				 1 + (limits->numEntries *
> >> sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> >>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record
> >> *entry;
> >>> -			ATOM_PPLIB_VCE_State_Record *state_entry;
> >>> -			VCEClockInfo *vce_clk;
> >>> -			u32 size = limits->numEntries *
> >>> -				sizeof(struct
> >> amdgpu_vce_clock_voltage_dependency_entry);
> >>> -			adev-
> >>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> >>> -				kzalloc(size, GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			adev-
> >>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> >>> -				limits->numEntries;
> >>> -			entry = &limits->entries[0];
> >>> -			state_entry = &states->entries[0];
> >>> -			for (i = 0; i < limits->numEntries; i++) {
> >>> -				vce_clk = (VCEClockInfo *)
> >>> -					((u8 *)&array->entries[0] +
> >>> -					 (entry->ucVCEClockInfoIndex *
> >> sizeof(VCEClockInfo)));
> >>> -				adev-
> >>>
> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk
> >> =
> >>> -					le16_to_cpu(vce_clk->usEVClkLow) |
> >> (vce_clk->ucEVClkHigh << 16);
> >>> -				adev-
> >>>
> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk
> >> =
> >>> -					le16_to_cpu(vce_clk->usECClkLow) |
> >> (vce_clk->ucECClkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> >>> -					le16_to_cpu(entry->usVoltage);
> >>> -				entry =
> >> (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> >>> -			}
> >>> -			adev->pm.dpm.num_of_vce_states =
> >>> -					states->numEntries >
> >> AMD_MAX_VCE_LEVELS ?
> >>> -					AMD_MAX_VCE_LEVELS : states-
> >>> numEntries;
> >>> -			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++)
> >> {
> >>> -				vce_clk = (VCEClockInfo *)
> >>> -					((u8 *)&array->entries[0] +
> >>> -					 (state_entry->ucVCEClockInfoIndex
> >> * sizeof(VCEClockInfo)));
> >>> -				adev->pm.dpm.vce_states[i].evclk =
> >>> -					le16_to_cpu(vce_clk->usEVClkLow) |
> >> (vce_clk->ucEVClkHigh << 16);
> >>> -				adev->pm.dpm.vce_states[i].ecclk =
> >>> -					le16_to_cpu(vce_clk->usECClkLow) |
> >> (vce_clk->ucECClkHigh << 16);
> >>> -				adev->pm.dpm.vce_states[i].clk_idx =
> >>> -					state_entry->ucClockInfoIndex &
> >> 0x3f;
> >>> -				adev->pm.dpm.vce_states[i].pstate =
> >>> -					(state_entry->ucClockInfoIndex &
> >> 0xc0) >> 6;
> >>> -				state_entry =
> >> (ATOM_PPLIB_VCE_State_Record *)
> >>> -					((u8 *)state_entry +
> >> sizeof(ATOM_PPLIB_VCE_State_Record));
> >>> -			}
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> >>> -			ext_hdr->usUVDTableOffset) {
> >>> -			UVDClockInfoArray *array = (UVDClockInfoArray *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) +
> >> 1);
> >>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table
> >> *limits =
> >>> -
> >> 	(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) +
> >> 1 +
> >>> -				 1 + (array->ucNumEntries * sizeof
> >> (UVDClockInfo)));
> >>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record
> >> *entry;
> >>> -			u32 size = limits->numEntries *
> >>> -				sizeof(struct
> >> amdgpu_uvd_clock_voltage_dependency_entry);
> >>> -			adev-
> >>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> >>> -				kzalloc(size, GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			adev-
> >>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> >>> -				limits->numEntries;
> >>> -			entry = &limits->entries[0];
> >>> -			for (i = 0; i < limits->numEntries; i++) {
> >>> -				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> >>> -					((u8 *)&array->entries[0] +
> >>> -					 (entry->ucUVDClockInfoIndex *
> >> sizeof(UVDClockInfo)));
> >>> -				adev-
> >>>
> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> >>> -					le16_to_cpu(uvd_clk->usVClkLow) |
> >> (uvd_clk->ucVClkHigh << 16);
> >>> -				adev-
> >>>
> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> >>> -					le16_to_cpu(uvd_clk->usDClkLow) |
> >> (uvd_clk->ucDClkHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> >>> -					le16_to_cpu(entry->usVoltage);
> >>> -				entry =
> >> (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> >>> -			}
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> >>> -			ext_hdr->usSAMUTableOffset) {
> >>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> >>> -				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table
> >> *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usSAMUTableOffset)
> >> + 1);
> >>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> >>> -			u32 size = limits->numEntries *
> >>> -				sizeof(struct
> >> amdgpu_clock_voltage_dependency_entry);
> >>> -			adev-
> >>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> >>> -				kzalloc(size, GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			adev-
> >>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> >>> -				limits->numEntries;
> >>> -			entry = &limits->entries[0];
> >>> -			for (i = 0; i < limits->numEntries; i++) {
> >>> -				adev-
> >>>
> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> >>> -					le16_to_cpu(entry->usSAMClockLow)
> >> | (entry->ucSAMClockHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v
> =
> >>> -					le16_to_cpu(entry->usVoltage);
> >>> -				entry =
> >> (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> >>> -			}
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> >>> -		    ext_hdr->usPPMTableOffset) {
> >>> -			ATOM_PPLIB_PPM_Table *ppm =
> >> (ATOM_PPLIB_PPM_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> >>> -			adev->pm.dpm.dyn_state.ppm_table =
> >>> -				kzalloc(sizeof(struct amdgpu_ppm_table),
> >> GFP_KERNEL);
> >>> -			if (!adev->pm.dpm.dyn_state.ppm_table) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			adev->pm.dpm.dyn_state.ppm_table->ppm_design
> >> = ppm->ucPpmDesign;
> >>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>> cpu_core_number =
> >>> -				le16_to_cpu(ppm->usCpuCoreNumber);
> >>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>> platform_tdp =
> >>> -				le32_to_cpu(ppm->ulPlatformTDP);
> >>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>> small_ac_platform_tdp =
> >>> -				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> >>> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdc
> >> =
> >>> -				le32_to_cpu(ppm->ulPlatformTDC);
> >>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>> small_ac_platform_tdc =
> >>> -				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> >>> -			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> >>> -				le32_to_cpu(ppm->ulApuTDP);
> >>> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> >>> -				le32_to_cpu(ppm->ulDGpuTDP);
> >>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>> dgpu_ulv_power =
> >>> -				le32_to_cpu(ppm->ulDGpuUlvPower);
> >>> -			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> >>> -				le32_to_cpu(ppm->ulTjmax);
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> >>> -			ext_hdr->usACPTableOffset) {
> >>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> >>> -				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table
> >> *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr->usACPTableOffset) +
> >> 1);
> >>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> >>> -			u32 size = limits->numEntries *
> >>> -				sizeof(struct
> >> amdgpu_clock_voltage_dependency_entry);
> >>> -			adev-
> >>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> >>> -				kzalloc(size, GFP_KERNEL);
> >>> -			if (!adev-
> >>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			adev-
> >>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> >>> -				limits->numEntries;
> >>> -			entry = &limits->entries[0];
> >>> -			for (i = 0; i < limits->numEntries; i++) {
> >>> -				adev-
> >>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk
> =
> >>> -					le16_to_cpu(entry->usACPClockLow)
> >> | (entry->ucACPClockHigh << 16);
> >>> -				adev-
> >>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> >>> -					le16_to_cpu(entry->usVoltage);
> >>> -				entry =
> >> (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> >>> -					((u8 *)entry +
> >> sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> >>> -			}
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> >>> -			ext_hdr->usPowerTuneTableOffset) {
> >>> -			u8 rev = *(u8 *)(mode_info->atom_context->bios +
> >> data_offset +
> >>> -					 le16_to_cpu(ext_hdr-
> >>> usPowerTuneTableOffset));
> >>> -			ATOM_PowerTune_Table *pt;
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table =
> >>> -				kzalloc(sizeof(struct amdgpu_cac_tdp_table),
> >> GFP_KERNEL);
> >>> -			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> >>> -
> >> 	amdgpu_free_extended_power_table(adev);
> >>> -				return -ENOMEM;
> >>> -			}
> >>> -			if (rev > 0) {
> >>> -				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt =
> >> (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> >>> -					(mode_info->atom_context->bios +
> >> data_offset +
> >>> -					 le16_to_cpu(ext_hdr-
> >>> usPowerTuneTableOffset));
> >>> -				adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> maximum_power_delivery_limit =
> >>> -					ppt->usMaximumPowerDeliveryLimit;
> >>> -				pt = &ppt->power_tune_table;
> >>> -			} else {
> >>> -				ATOM_PPLIB_POWERTUNE_Table *ppt =
> >> (ATOM_PPLIB_POWERTUNE_Table *)
> >>> -					(mode_info->atom_context->bios +
> >> data_offset +
> >>> -					 le16_to_cpu(ext_hdr-
> >>> usPowerTuneTableOffset));
> >>> -				adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> maximum_power_delivery_limit = 255;
> >>> -				pt = &ppt->power_tune_table;
> >>> -			}
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdp =
> >> le16_to_cpu(pt->usTDP);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> configurable_tdp =
> >>> -				le16_to_cpu(pt->usConfigurableTDP);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdc =
> >> le16_to_cpu(pt->usTDC);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> battery_power_limit =
> >>> -				le16_to_cpu(pt->usBatteryPowerLimit);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> small_power_limit =
> >>> -				le16_to_cpu(pt->usSmallPowerLimit);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> low_cac_leakage =
> >>> -				le16_to_cpu(pt->usLowCACLeakage);
> >>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>> high_cac_leakage =
> >>> -				le16_to_cpu(pt->usHighCACLeakage);
> >>> -		}
> >>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> >>> -				ext_hdr->usSclkVddgfxTableOffset) {
> >>> -			dep_table =
> >> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> -				(mode_info->atom_context->bios +
> >> data_offset +
> >>> -				 le16_to_cpu(ext_hdr-
> >>> usSclkVddgfxTableOffset));
> >>> -			ret = amdgpu_parse_clk_voltage_dep_table(
> >>> -					&adev-
> >>> pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> >>> -					dep_table);
> >>> -			if (ret) {
> >>> -				kfree(adev-
> >>> pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> >>> -				return ret;
> >>> -			}
> >>> -		}
> >>> -	}
> >>> -
> >>> -	return 0;
> >>> -}
> >>> -
> >>> -void amdgpu_free_extended_power_table(struct amdgpu_device
> *adev)
> >>> -{
> >>> -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev-
> >>> pm.dpm.dyn_state;
> >>> -
> >>> -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> >>> -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> >>> -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> >>> -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> >>> -	kfree(dyn_state->cac_leakage_table.entries);
> >>> -	kfree(dyn_state->phase_shedding_limits_table.entries);
> >>> -	kfree(dyn_state->ppm_table);
> >>> -	kfree(dyn_state->cac_tdp_table);
> >>> -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> >>> -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> >>> -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> >>> -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> >>> -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> >>> -}
> >>> -
> >>> -static const char *pp_lib_thermal_controller_names[] = {
> >>> -	"NONE",
> >>> -	"lm63",
> >>> -	"adm1032",
> >>> -	"adm1030",
> >>> -	"max6649",
> >>> -	"lm64",
> >>> -	"f75375",
> >>> -	"RV6xx",
> >>> -	"RV770",
> >>> -	"adt7473",
> >>> -	"NONE",
> >>> -	"External GPIO",
> >>> -	"Evergreen",
> >>> -	"emc2103",
> >>> -	"Sumo",
> >>> -	"Northern Islands",
> >>> -	"Southern Islands",
> >>> -	"lm96163",
> >>> -	"Sea Islands",
> >>> -	"Kaveri/Kabini",
> >>> -};
> >>> -
> >>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> >>> -{
> >>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> >>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> -	ATOM_PPLIB_THERMALCONTROLLER *controller;
> >>> -	struct amdgpu_i2c_bus_rec i2c_bus;
> >>> -	u16 data_offset;
> >>> -	u8 frev, crev;
> >>> -
> >>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> -				   &frev, &crev, &data_offset))
> >>> -		return;
> >>> -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> >>> -		(mode_info->atom_context->bios + data_offset);
> >>> -	controller = &power_table->sThermalController;
> >>> -
> >>> -	/* add the i2c bus for thermal/fan chip */
> >>> -	if (controller->ucType > 0) {
> >>> -		if (controller->ucFanParameters &
> >> ATOM_PP_FANPARAMETERS_NOFAN)
> >>> -			adev->pm.no_fan = true;
> >>> -		adev->pm.fan_pulses_per_revolution =
> >>> -			controller->ucFanParameters &
> >>
> ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_M
> >> ASK;
> >>> -		if (adev->pm.fan_pulses_per_revolution) {
> >>> -			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> >>> -			adev->pm.fan_max_rpm = controller-
> >>> ucFanMaxRPM;
> >>> -		}
> >>> -		if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_RV6xx) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_RV6XX;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_RV770) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_RV770;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_EVERGREEN;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_SUMO) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_SUMO;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_KAVERI) {
> >>> -			DRM_INFO("Internal thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> >>> -		} else if (controller->ucType ==
> >> ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> >>> -			DRM_INFO("External GPIO thermal controller %s fan
> >> control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_EXTERNAL_GPIO;
> >>> -		} else if (controller->ucType ==
> >>> -
> >> ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> >>> -			DRM_INFO("ADT7473 with internal thermal
> >> controller %s fan control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> >>> -		} else if (controller->ucType ==
> >>> -
> >> ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> >>> -			DRM_INFO("EMC2103 with internal thermal
> >> controller %s fan control\n",
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> >>> -		} else if (controller->ucType <
> >> ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> >>> -			DRM_INFO("Possible %s thermal controller at
> >> 0x%02x %s fan control\n",
> >>> -
> >> pp_lib_thermal_controller_names[controller->ucType],
> >>> -				 controller->ucI2cAddress >> 1,
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -			adev->pm.int_thermal_type =
> >> THERMAL_TYPE_EXTERNAL;
> >>> -			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev,
> >> controller->ucI2cLine);
> >>> -			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev,
> >> &i2c_bus);
> >>> -			if (adev->pm.i2c_bus) {
> >>> -				struct i2c_board_info info = { };
> >>> -				const char *name =
> >> pp_lib_thermal_controller_names[controller->ucType];
> >>> -				info.addr = controller->ucI2cAddress >> 1;
> >>> -				strlcpy(info.type, name, sizeof(info.type));
> >>> -				i2c_new_client_device(&adev->pm.i2c_bus-
> >>> adapter, &info);
> >>> -			}
> >>> -		} else {
> >>> -			DRM_INFO("Unknown thermal controller type %d at
> >> 0x%02x %s fan control\n",
> >>> -				 controller->ucType,
> >>> -				 controller->ucI2cAddress >> 1,
> >>> -				 (controller->ucFanParameters &
> >>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ?
> >> "without" : "with");
> >>> -		}
> >>> -	}
> >>> -}
> >>> -
> >>> -struct amd_vce_state*
> >>> -amdgpu_get_vce_clock_state(void *handle, u32 idx)
> >>> -{
> >>> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -
> >>> -	if (idx < adev->pm.dpm.num_of_vce_states)
> >>> -		return &adev->pm.dpm.vce_states[idx];
> >>> -
> >>> -	return NULL;
> >>> -}
> >>> -
> >>>    int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
> >>>    {
> >>>    	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> >>> @@ -1243,211 +465,6 @@ void
> >> amdgpu_dpm_thermal_work_handler(struct work_struct *work)
> >>>    	amdgpu_pm_compute_clocks(adev);
> >>>    }
> >>>
> >>> -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct
> >> amdgpu_device *adev,
> >>> -						     enum
> >> amd_pm_state_type dpm_state)
> >>> -{
> >>> -	int i;
> >>> -	struct amdgpu_ps *ps;
> >>> -	u32 ui_class;
> >>> -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> >>> -		true : false;
> >>> -
> >>> -	/* check if the vblank period is too short to adjust the mclk */
> >>> -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short)
> >> {
> >>> -		if (amdgpu_dpm_vblank_too_short(adev))
> >>> -			single_display = false;
> >>> -	}
> >>> -
> >>> -	/* certain older asics have a separare 3D performance state,
> >>> -	 * so try that first if the user selected performance
> >>> -	 */
> >>> -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> >>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> >>> -	/* balanced states don't exist at the moment */
> >>> -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> >>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> -
> >>> -restart_search:
> >>> -	/* Pick the best power state based on current conditions */
> >>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> >>> -		ps = &adev->pm.dpm.ps[i];
> >>> -		ui_class = ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> >>> -		switch (dpm_state) {
> >>> -		/* user states */
> >>> -		case POWER_STATE_TYPE_BATTERY:
> >>> -			if (ui_class ==
> >> ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> >>> -				if (ps->caps &
> >> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> -					if (single_display)
> >>> -						return ps;
> >>> -				} else
> >>> -					return ps;
> >>> -			}
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_BALANCED:
> >>> -			if (ui_class ==
> >> ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> >>> -				if (ps->caps &
> >> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> -					if (single_display)
> >>> -						return ps;
> >>> -				} else
> >>> -					return ps;
> >>> -			}
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_PERFORMANCE:
> >>> -			if (ui_class ==
> >> ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> >>> -				if (ps->caps &
> >> ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> -					if (single_display)
> >>> -						return ps;
> >>> -				} else
> >>> -					return ps;
> >>> -			}
> >>> -			break;
> >>> -		/* internal states */
> >>> -		case POWER_STATE_TYPE_INTERNAL_UVD:
> >>> -			if (adev->pm.dpm.uvd_ps)
> >>> -				return adev->pm.dpm.uvd_ps;
> >>> -			else
> >>> -				break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>> -			if (ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>> -			if (ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>> -			if (ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>> -			if (ps->class2 &
> >> ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_BOOT:
> >>> -			return adev->pm.dpm.boot_ps;
> >>> -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>> -			if (ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_ULV:
> >>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>> -				return ps;
> >>> -			break;
> >>> -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>> -			if (ps->class &
> >> ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>> -				return ps;
> >>> -			break;
> >>> -		default:
> >>> -			break;
> >>> -		}
> >>> -	}
> >>> -	/* use a fallback state if we didn't match */
> >>> -	switch (dpm_state) {
> >>> -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> >>> -		goto restart_search;
> >>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>> -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>> -		if (adev->pm.dpm.uvd_ps) {
> >>> -			return adev->pm.dpm.uvd_ps;
> >>> -		} else {
> >>> -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> -			goto restart_search;
> >>> -		}
> >>> -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> >>> -		goto restart_search;
> >>> -	case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>> -		dpm_state = POWER_STATE_TYPE_BATTERY;
> >>> -		goto restart_search;
> >>> -	case POWER_STATE_TYPE_BATTERY:
> >>> -	case POWER_STATE_TYPE_BALANCED:
> >>> -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> -		goto restart_search;
> >>> -	default:
> >>> -		break;
> >>> -	}
> >>> -
> >>> -	return NULL;
> >>> -}
> >>> -
> >>> -static void amdgpu_dpm_change_power_state_locked(struct
> >> amdgpu_device *adev)
> >>> -{
> >>> -	struct amdgpu_ps *ps;
> >>> -	enum amd_pm_state_type dpm_state;
> >>> -	int ret;
> >>> -	bool equal = false;
> >>> -
> >>> -	/* if dpm init failed */
> >>> -	if (!adev->pm.dpm_enabled)
> >>> -		return;
> >>> -
> >>> -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> >>> -		/* add other state override checks here */
> >>> -		if ((!adev->pm.dpm.thermal_active) &&
> >>> -		    (!adev->pm.dpm.uvd_active))
> >>> -			adev->pm.dpm.state = adev->pm.dpm.user_state;
> >>> -	}
> >>> -	dpm_state = adev->pm.dpm.state;
> >>> -
> >>> -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> >>> -	if (ps)
> >>> -		adev->pm.dpm.requested_ps = ps;
> >>> -	else
> >>> -		return;
> >>> -
> >>> -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs-
> >>> print_power_state) {
> >>> -		printk("switching from power state:\n");
> >>> -		amdgpu_dpm_print_power_state(adev, adev-
> >>> pm.dpm.current_ps);
> >>> -		printk("switching to power state:\n");
> >>> -		amdgpu_dpm_print_power_state(adev, adev-
> >>> pm.dpm.requested_ps);
> >>> -	}
> >>> -
> >>> -	/* update whether vce is active */
> >>> -	ps->vce_active = adev->pm.dpm.vce_active;
> >>> -	if (adev->powerplay.pp_funcs->display_configuration_changed)
> >>> -		amdgpu_dpm_display_configuration_changed(adev);
> >>> -
> >>> -	ret = amdgpu_dpm_pre_set_power_state(adev);
> >>> -	if (ret)
> >>> -		return;
> >>> -
> >>> -	if (adev->powerplay.pp_funcs->check_state_equal) {
> >>> -		if (0 != amdgpu_dpm_check_state_equal(adev, adev-
> >>> pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> >>> -			equal = false;
> >>> -	}
> >>> -
> >>> -	if (equal)
> >>> -		return;
> >>> -
> >>> -	if (adev->powerplay.pp_funcs->set_power_state)
> >>> -		adev->powerplay.pp_funcs->set_power_state(adev-
> >>> powerplay.pp_handle);
> >>> -
> >>> -	amdgpu_dpm_post_set_power_state(adev);
> >>> -
> >>> -	adev->pm.dpm.current_active_crtcs = adev-
> >>> pm.dpm.new_active_crtcs;
> >>> -	adev->pm.dpm.current_active_crtc_count = adev-
> >>> pm.dpm.new_active_crtc_count;
> >>> -
> >>> -	if (adev->powerplay.pp_funcs->force_performance_level) {
> >>> -		if (adev->pm.dpm.thermal_active) {
> >>> -			enum amd_dpm_forced_level level = adev-
> >>> pm.dpm.forced_level;
> >>> -			/* force low perf level for thermal */
> >>> -			amdgpu_dpm_force_performance_level(adev,
> >> AMD_DPM_FORCED_LEVEL_LOW);
> >>> -			/* save the user's level */
> >>> -			adev->pm.dpm.forced_level = level;
> >>> -		} else {
> >>> -			/* otherwise, user selected level */
> >>> -			amdgpu_dpm_force_performance_level(adev,
> >> adev->pm.dpm.forced_level);
> >>> -		}
> >>> -	}
> >>> -}
> >>> -
> >>>    void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >>>    {
> >>
> >> Rename to amdgpu_dpm_compute_clocks?
> > [Quan, Evan] Sure, I can do that.
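
A sketch of the agreed rename, assuming a straight rename with no behavioral change, which keeps the prototype consistent with the other amdgpu_dpm_* entry points in this header:

	/* before */
	void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
	/* after (agreed above) */
	void amdgpu_dpm_compute_clocks(struct amdgpu_device *adev);
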
> >>
> >>>    	int i = 0;
> >>> @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct
> >> amdgpu_device *adev)
> >>>    			amdgpu_fence_wait_empty(ring);
> >>>    	}
> >>>
> >>> -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
> >>> +	if ((adev->family == AMDGPU_FAMILY_SI) ||
> >>> +	     (adev->family == AMDGPU_FAMILY_KV)) {
> >>> +		amdgpu_dpm_get_active_displays(adev);
> >>> +		adev->powerplay.pp_funcs->change_power_state(adev-
> >>> powerplay.pp_handle);
> >>
> >> It would be clearer if the newly added logic in this function were in
> >> another patch. This does more than what the patch subject says.
> > [Quan, Evan] Actually, there is no new logic added. This branch covers the
> > "!adev->powerplay.pp_funcs->dispatch_tasks" case. Since SI and KV are the
> > only families that do not implement ->dispatch_tasks(), I used
> > "((adev->family == AMDGPU_FAMILY_SI) || (adev->family == AMDGPU_FAMILY_KV))"
> > here. Maybe I should stick with "!adev->powerplay.pp_funcs->dispatch_tasks"?
> 
> This change also adds a new callback change_power_state(). I interpreted
> it as something different from what the patch subject says.
> 
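
For reference, the two guards under discussion select the same legacy path and stay equivalent only as long as SI and KV remain the sole families without ->dispatch_tasks(). A minimal sketch of both forms, with the body taken from the hunk above:

	/* Capability-based guard (matches the old "!...dispatch_tasks" test): */
	if (!adev->powerplay.pp_funcs->dispatch_tasks) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}

	/* Family-based guard (the form used in this patch): */
	if ((adev->family == AMDGPU_FAMILY_SI) ||
	    (adev->family == AMDGPU_FAMILY_KV)) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}
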
> >>
> >>> +	} else {
> >>>    		if (!amdgpu_device_has_dc_support(adev)) {
> >>> -			mutex_lock(&adev->pm.mutex);
> >>>    			amdgpu_dpm_get_active_displays(adev);
> >>>    			adev->pm.pm_display_cfg.num_display = adev-
> >>> pm.dpm.new_active_crtc_count;
> >>>    			adev->pm.pm_display_cfg.vrefresh =
> >> amdgpu_dpm_get_vrefresh(adev);
> >>> @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct
> >> amdgpu_device *adev)
> >>>    				adev->powerplay.pp_funcs-
> >>> display_configuration_change(
> >>>    							adev-
> >>> powerplay.pp_handle,
> >>>    							&adev-
> >>> pm.pm_display_cfg);
> >>> -			mutex_unlock(&adev->pm.mutex);
> >>>    		}
> >>>    		amdgpu_dpm_dispatch_task(adev,
> >> AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
> >>> -	} else {
> >>> -		mutex_lock(&adev->pm.mutex);
> >>> -		amdgpu_dpm_get_active_displays(adev);
> >>> -		amdgpu_dpm_change_power_state_locked(adev);
> >>> -		mutex_unlock(&adev->pm.mutex);
> >>>    	}
> >>>    }
> >>>
> >>> @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct
> >> amdgpu_device *adev, bool enable)
> >>>    	}
> >>>    }
> >>>
> >>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> >>> -{
> >>> -	int i;
> >>> -
> >>> -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> >>> -		return;
> >>> -
> >>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> >>> -		amdgpu_dpm_print_power_state(adev, &adev-
> >>> pm.dpm.ps[i]);
> >>> -
> >>> -}
> >>> -
> >>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool
> >> enable)
> >>>    {
> >>>    	int ret = 0;
> >>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >> b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> index 01120b302590..295d2902aef7 100644
> >>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>> @@ -366,24 +366,10 @@ enum amdgpu_display_gap
> >>>        AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
> >>>    };
> >>>
> >>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> >>> -void amdgpu_dpm_print_cap_info(u32 caps);
> >>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>> -				struct amdgpu_ps *rps);
> >>>    u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
> >>>    int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum
> >> amd_pp_sensors sensor,
> >>>    			   void *data, uint32_t *size);
> >>>
> >>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> >>> -
> >>> -int amdgpu_parse_extended_power_table(struct amdgpu_device
> *adev);
> >>> -void amdgpu_free_extended_power_table(struct amdgpu_device
> >> *adev);
> >>> -
> >>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> >>> -
> >>> -struct amd_vce_state*
> >>> -amdgpu_get_vce_clock_state(void *handle, u32 idx);
> >>> -
> >>>    int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device
> >> *adev,
> >>>    				      uint32_t block_type, bool gate);
> >>>
> >>> @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct
> >> amdgpu_device *adev);
> >>>    void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool
> >> enable);
> >>>    void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool
> >> enable);
> >>>    void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool
> >> enable);
> >>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >>>    int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev,
> >> uint32_t *smu_version);
> >>>    int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool
> >> enable);
> >>>    int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device
> >> *adev, uint32_t size);
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >> b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>> index 0fb114adc79f..614d8b6a58ad 100644
> >>> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>> @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix
> >> /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
> >>>
> >>>    include $(AMD_POWERPLAY)
> >>>
> >>> -POWER_MGR-y = amd_powerplay.o
> >>> +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
> >>>
> >>>    POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >> b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>> index 380a5336c74f..90f4c65659e2 100644
> >>> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>> @@ -36,6 +36,7 @@
> >>>
> >>>    #include "gca/gfx_7_2_d.h"
> >>>    #include "gca/gfx_7_2_sh_mask.h"
> >>> +#include "legacy_dpm.h"
> >>>
> >>>    #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
> >>>    #define KV_MINIMUM_ENGINE_CLOCK         800
> >>> @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs
> kv_dpm_funcs
> >> = {
> >>>    	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >>>    	.check_state_equal = kv_check_state_equal,
> >>>    	.read_sensor = &kv_dpm_read_sensor,
> >>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >>>    };
> >>>
> >>>    static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >> b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>
> >> This could be confused with all the APIs that support legacy DPM. This
> >> file holds only a subset of the APIs supporting legacy DPM. It needs a
> >> better name - powerplay_ctrl/powerplay_util?
> > [Quan, Evan] The "legacy_dpm" name refers to the logic used only by
> > si/kv (si_dpm.c, kv_dpm.c).
> > Since this logic is not used by default (the radeon driver, rather than
> > amdgpu, supports those legacy ASICs by default), we might drop support
> > for them from the amdgpu driver. So I gathered all of those APIs and put
> > them in a new holder.
> > Maybe you mistook it for a new holder for the powerplay APIs (used by
> > VI/AI)?

> As it got moved under powerplay, I thought they were also used in AI/VI
> powerplay. Otherwise, move si/kv along with this out of powerplay and
> keep them separate.
> 
> Thanks,
> Lijo
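
For context, the only consumers of legacy_dpm.c are the si/kv function tables, as in the kv_dpm.c hunk above; a sketch of the wiring (the si side is an assumption -- this excerpt does not show si_dpm.c, but the full patch presumably wires it the same way):

	static const struct amd_pm_funcs kv_dpm_funcs = {
		/* ... */
		.get_vce_clock_state = amdgpu_get_vce_clock_state,
		.check_state_equal = kv_check_state_equal,
		.change_power_state = amdgpu_dpm_change_power_state_locked,
	};
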
> 
> >
> > BR
> > Evan
> >>
> >> Thanks,
> >> Lijo
> >>
> >>> new file mode 100644
> >>> index 000000000000..9427c1026e1d
> >>> --- /dev/null
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>> @@ -0,0 +1,1453 @@
> >>> +/*
> >>> + * Copyright 2021 Advanced Micro Devices, Inc.
> >>> + *
> >>> + * Permission is hereby granted, free of charge, to any person obtaining
> a
> >>> + * copy of this software and associated documentation files (the
> >> "Software"),
> >>> + * to deal in the Software without restriction, including without
> limitation
> >>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> >>> + * and/or sell copies of the Software, and to permit persons to whom
> the
> >>> + * Software is furnished to do so, subject to the following conditions:
> >>> + *
> >>> + * The above copyright notice and this permission notice shall be
> included
> >> in
> >>> + * all copies or substantial portions of the Software.
> >>> + *
> >>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
> >> KIND, EXPRESS OR
> >>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> >> MERCHANTABILITY,
> >>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
> >> NO EVENT SHALL
> >>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
> >> DAMAGES OR
> >>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> >> OTHERWISE,
> >>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
> OR
> >> THE USE OR
> >>> + * OTHER DEALINGS IN THE SOFTWARE.
> >>> + */
> >>> +
> >>> +#include "amdgpu.h"
> >>> +#include "amdgpu_atombios.h"
> >>> +#include "amdgpu_i2c.h"
> >>> +#include "atom.h"
> >>> +#include "amd_pcie.h"
> >>> +#include "legacy_dpm.h"
> >>> +
> >>> +#define amdgpu_dpm_pre_set_power_state(adev) \
> >>> +		((adev)->powerplay.pp_funcs-
> >>> pre_set_power_state((adev)->powerplay.pp_handle))
> >>> +
> >>> +#define amdgpu_dpm_post_set_power_state(adev) \
> >>> +		((adev)->powerplay.pp_funcs-
> >>> post_set_power_state((adev)->powerplay.pp_handle))
> >>> +
> >>> +#define amdgpu_dpm_display_configuration_changed(adev) \
> >>> +		((adev)->powerplay.pp_funcs-
> >>> display_configuration_changed((adev)->powerplay.pp_handle))
> >>> +
> >>> +#define amdgpu_dpm_print_power_state(adev, ps) \
> >>> +		((adev)->powerplay.pp_funcs->print_power_state((adev)-
> >>> powerplay.pp_handle, (ps)))
> >>> +
> >>> +#define amdgpu_dpm_vblank_too_short(adev) \
> >>> +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)-
> >>> powerplay.pp_handle))
> >>> +
> >>> +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> >>> +		((adev)->powerplay.pp_funcs->check_state_equal((adev)-
> >>> powerplay.pp_handle, (cps), (rps), (equal)))
> >>> +
> >>> +int amdgpu_atombios_get_memory_pll_dividers(struct
> amdgpu_device
> >> *adev,
> >>> +					    u32 clock,
> >>> +					    bool strobe_mode,
> >>> +					    struct atom_mpll_param
> >> *mpll_param)
> >>> +{
> >>> +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> >>> +	int index = GetIndexIntoMasterTable(COMMAND,
> >> ComputeMemoryClockParam);
> >>> +	u8 frev, crev;
> >>> +
> >>> +	memset(&args, 0, sizeof(args));
> >>> +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> >>> +
> >>> +	if (!amdgpu_atom_parse_cmd_header(adev-
> >>> mode_info.atom_context, index, &frev, &crev))
> >>> +		return -EINVAL;
> >>> +
> >>> +	switch (frev) {
> >>> +	case 2:
> >>> +		switch (crev) {
> >>> +		case 1:
> >>> +			/* SI */
> >>> +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> >>> +			args.ucInputFlag = 0;
> >>> +			if (strobe_mode)
> >>> +				args.ucInputFlag |=
> >> MPLL_INPUT_FLAG_STROBE_MODE_EN;
> >>> +
> >>> +			amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> +
> >>> +			mpll_param->clkfrac =
> >> le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> >>> +			mpll_param->clkf =
> >> le16_to_cpu(args.ulFbDiv.usFbDiv);
> >>> +			mpll_param->post_div = args.ucPostDiv;
> >>> +			mpll_param->dll_speed = args.ucDllSpeed;
> >>> +			mpll_param->bwcntl = args.ucBWCntl;
> >>> +			mpll_param->vco_mode =
> >>> +				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_VCO_MODE_MASK);
> >>> +			mpll_param->yclk_sel =
> >>> +				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> >>> +			mpll_param->qdr =
> >>> +				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> >>> +			mpll_param->half_rate =
> >>> +				(args.ucPllCntlFlag &
> >> MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> >>> +			break;
> >>> +		default:
> >>> +			return -EINVAL;
> >>> +		}
> >>> +		break;
> >>> +	default:
> >>> +		return -EINVAL;
> >>> +	}
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +void amdgpu_atombios_set_engine_dram_timings(struct
> >> amdgpu_device *adev,
> >>> +					     u32 eng_clock, u32 mem_clock)
> >>> +{
> >>> +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> >>> +	int index = GetIndexIntoMasterTable(COMMAND,
> >> DynamicMemorySettings);
> >>> +	u32 tmp;
> >>> +
> >>> +	memset(&args, 0, sizeof(args));
> >>> +
> >>> +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> >>> +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> >>> +
> >>> +	args.ulTargetEngineClock = cpu_to_le32(tmp);
> >>> +	if (mem_clock)
> >>> +		args.sReserved.ulClock = cpu_to_le32(mem_clock &
> >> SET_CLOCK_FREQ_MASK);
> >>> +
> >>> +	amdgpu_atom_execute_table(adev->mode_info.atom_context,
> >> index, (uint32_t *)&args);
> >>> +}
> >>> +
> >>> +union firmware_info {
> >>> +	ATOM_FIRMWARE_INFO info;
> >>> +	ATOM_FIRMWARE_INFO_V1_2 info_12;
> >>> +	ATOM_FIRMWARE_INFO_V1_3 info_13;
> >>> +	ATOM_FIRMWARE_INFO_V1_4 info_14;
> >>> +	ATOM_FIRMWARE_INFO_V2_1 info_21;
> >>> +	ATOM_FIRMWARE_INFO_V2_2 info_22;
> >>> +};
> >>> +
> >>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device
> >> *adev,
> >>> +					  u16 *vddc, u16 *vddci, u16 *mvdd)
> >>> +{
> >>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> >>> +	u8 frev, crev;
> >>> +	u16 data_offset;
> >>> +	union firmware_info *firmware_info;
> >>> +
> >>> +	*vddc = 0;
> >>> +	*vddci = 0;
> >>> +	*mvdd = 0;
> >>> +
> >>> +	if (amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> +				   &frev, &crev, &data_offset)) {
> >>> +		firmware_info =
> >>> +			(union firmware_info *)(mode_info->atom_context-
> >>> bios +
> >>> +						data_offset);
> >>> +		*vddc = le16_to_cpu(firmware_info-
> >>> info_14.usBootUpVDDCVoltage);
> >>> +		if ((frev == 2) && (crev >= 2)) {
> >>> +			*vddci = le16_to_cpu(firmware_info-
> >>> info_22.usBootUpVDDCIVoltage);
> >>> +			*mvdd = le16_to_cpu(firmware_info-
> >>> info_22.usBootUpMVDDCVoltage);
> >>> +		}
> >>> +	}
> >>> +}
> >>> +
> >>> +union set_voltage {
> >>> +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> >>> +	struct _SET_VOLTAGE_PARAMETERS v1;
> >>> +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> >>> +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> >>> +};
> >>> +
> >>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev,
> u8
> >> voltage_type,
> >>> +			     u16 voltage_id, u16 *voltage)
> >>> +{
> >>> +	union set_voltage args;
> >>> +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> >>> +	u8 frev, crev;
> >>> +
> >>> +	if (!amdgpu_atom_parse_cmd_header(adev-
> >>> mode_info.atom_context, index, &frev, &crev))
> >>> +		return -EINVAL;
> >>> +
> >>> +	switch (crev) {
> >>> +	case 1:
> >>> +		return -EINVAL;
> >>> +	case 2:
> >>> +		args.v2.ucVoltageType =
> >> SET_VOLTAGE_GET_MAX_VOLTAGE;
> >>> +		args.v2.ucVoltageMode = 0;
> >>> +		args.v2.usVoltageLevel = 0;
> >>> +
> >>> +		amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> +
> >>> +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> >>> +		break;
> >>> +	case 3:
> >>> +		args.v3.ucVoltageType = voltage_type;
> >>> +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> >>> +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> >>> +
> >>> +		amdgpu_atom_execute_table(adev-
> >>> mode_info.atom_context, index, (uint32_t *)&args);
> >>> +
> >>> +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> >>> +		break;
> >>> +	default:
> >>> +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>> +		return -EINVAL;
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +int
> amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct
> >> amdgpu_device *adev,
> >>> +						      u16 *voltage,
> >>> +						      u16 leakage_idx)
> >>> +{
> >>> +	return amdgpu_atombios_get_max_vddc(adev,
> >> VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> >>> +}
> >>> +
> >>> +union voltage_object_info {
> >>> +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> >>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> >>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> >>> +};
> >>> +
> >>> +union voltage_object {
> >>> +	struct _ATOM_VOLTAGE_OBJECT v1;
> >>> +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> >>> +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> >>> +};
> >>> +
> >>> +static ATOM_VOLTAGE_OBJECT_V3
> >>
> *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_I
> >> NFO_V3_1 *v3,
> >>> +									u8
> >> voltage_type, u8 voltage_mode)
> >>> +{
> >>> +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> >>> +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1,
> >> asVoltageObj[0]);
> >>> +	u8 *start = (u8 *)v3;
> >>> +
> >>> +	while (offset < size) {
> >>> +		ATOM_VOLTAGE_OBJECT_V3 *vo =
> >> (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> >>> +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType ==
> >> voltage_type) &&
> >>> +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode ==
> >> voltage_mode))
> >>> +			return vo;
> >>> +		offset += le16_to_cpu(vo-
> >>> asGpioVoltageObj.sHeader.usSize);
> >>> +	}
> >>> +	return NULL;
> >>> +}
> >>> +
> >>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>> +			      u8 voltage_type,
> >>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> >>> +{
> >>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> +	u8 frev, crev;
> >>> +	u16 data_offset, size;
> >>> +	union voltage_object_info *voltage_info;
> >>> +	union voltage_object *voltage_object = NULL;
> >>> +
> >>> +	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> +				   &frev, &crev, &data_offset)) {
> >>> +		voltage_info = (union voltage_object_info *)
> >>> +			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> +
> >>> +		switch (frev) {
> >>> +		case 3:
> >>> +			switch (crev) {
> >>> +			case 1:
> >>> +				voltage_object = (union voltage_object *)
> >>> +
> >> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> +
> >> voltage_type,
> >>> +
> >> VOLTAGE_OBJ_SVID2);
> >>> +				if (voltage_object) {
> >>> +					*svd_gpio_id = voltage_object-
> >>> v3.asSVID2Obj.ucSVDGpioId;
> >>> +					*svc_gpio_id = voltage_object-
> >>> v3.asSVID2Obj.ucSVCGpioId;
> >>> +				} else {
> >>> +					return -EINVAL;
> >>> +				}
> >>> +				break;
> >>> +			default:
> >>> +				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> +				return -EINVAL;
> >>> +			}
> >>> +			break;
> >>> +		default:
> >>> +			DRM_ERROR("unknown voltage object table\n");
> >>> +			return -EINVAL;
> >>> +		}
> >>> +
> >>> +	}
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +bool
> >>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>> +				u8 voltage_type, u8 voltage_mode)
> >>> +{
> >>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> +	u8 frev, crev;
> >>> +	u16 data_offset, size;
> >>> +	union voltage_object_info *voltage_info;
> >>> +
> >>> +	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> +				   &frev, &crev, &data_offset)) {
> >>> +		voltage_info = (union voltage_object_info *)
> >>> +			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> +
> >>> +		switch (frev) {
> >>> +		case 3:
> >>> +			switch (crev) {
> >>> +			case 1:
> >>> +				if
> >> (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> +
> >> voltage_type, voltage_mode))
> >>> +					return true;
> >>> +				break;
> >>> +			default:
> >>> +				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> +				return false;
> >>> +			}
> >>> +			break;
> >>> +		default:
> >>> +			DRM_ERROR("unknown voltage object table\n");
> >>> +			return false;
> >>> +		}
> >>> +
> >>> +	}
> >>> +	return false;
> >>> +}
> >>> +
> >>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device
> *adev,
> >>> +				      u8 voltage_type, u8 voltage_mode,
> >>> +				      struct atom_voltage_table *voltage_table)
> >>> +{
> >>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>> +	u8 frev, crev;
> >>> +	u16 data_offset, size;
> >>> +	int i;
> >>> +	union voltage_object_info *voltage_info;
> >>> +	union voltage_object *voltage_object = NULL;
> >>> +
> >>> +	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> +				   &frev, &crev, &data_offset)) {
> >>> +		voltage_info = (union voltage_object_info *)
> >>> +			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> +
> >>> +		switch (frev) {
> >>> +		case 3:
> >>> +			switch (crev) {
> >>> +			case 1:
> >>> +				voltage_object = (union voltage_object *)
> >>> +
> >> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>> +
> >> voltage_type, voltage_mode);
> >>> +				if (voltage_object) {
> >>> +					ATOM_GPIO_VOLTAGE_OBJECT_V3
> >> *gpio =
> >>> +						&voltage_object-
> >>> v3.asGpioVoltageObj;
> >>> +					VOLTAGE_LUT_ENTRY_V2 *lut;
> >>> +					if (gpio->ucGpioEntryNum >
> >> MAX_VOLTAGE_ENTRIES)
> >>> +						return -EINVAL;
> >>> +					lut = &gpio->asVolGpioLut[0];
> >>> +					for (i = 0; i < gpio->ucGpioEntryNum;
> >> i++) {
> >>> +						voltage_table-
> >>> entries[i].value =
> >>> +							le16_to_cpu(lut-
> >>> usVoltageValue);
> >>> +						voltage_table-
> >>> entries[i].smio_low =
> >>> +							le32_to_cpu(lut-
> >>> ulVoltageId);
> >>> +						lut =
> >> (VOLTAGE_LUT_ENTRY_V2 *)
> >>> +							((u8 *)lut +
> >> sizeof(VOLTAGE_LUT_ENTRY_V2));
> >>> +					}
> >>> +					voltage_table->mask_low =
> >> le32_to_cpu(gpio->ulGpioMaskVal);
> >>> +					voltage_table->count = gpio-
> >>> ucGpioEntryNum;
> >>> +					voltage_table->phase_delay = gpio-
> >>> ucPhaseDelay;
> >>> +					return 0;
> >>> +				}
> >>> +				break;
> >>> +			default:
> >>> +				DRM_ERROR("unknown voltage object
> >> table\n");
> >>> +				return -EINVAL;
> >>> +			}
> >>> +			break;
> >>> +		default:
> >>> +			DRM_ERROR("unknown voltage object table\n");
> >>> +			return -EINVAL;
> >>> +		}
> >>> +	}
> >>> +	return -EINVAL;
> >>> +}
> >>> +
> >>> +union vram_info {
> >>> +	struct _ATOM_VRAM_INFO_V3 v1_3;
> >>> +	struct _ATOM_VRAM_INFO_V4 v1_4;
> >>> +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> >>> +};
> >>> +
> >>> +#define MEM_ID_MASK           0xff000000
> >>> +#define MEM_ID_SHIFT          24
> >>> +#define CLOCK_RANGE_MASK      0x00ffffff
> >>> +#define CLOCK_RANGE_SHIFT     0
> >>> +#define LOW_NIBBLE_MASK       0xf
> >>> +#define DATA_EQU_PREV         0
> >>> +#define DATA_FROM_TABLE       4
> >>> +
> >>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device
> *adev,
> >>> +				      u8 module_index,
> >>> +				      struct atom_mc_reg_table *reg_table)
> >>> +{
> >>> +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> >>> +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> >>> +	u32 i = 0, j;
> >>> +	u16 data_offset, size;
> >>> +	union vram_info *vram_info;
> >>> +
> >>> +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> >>> +
> >>> +	if (amdgpu_atom_parse_data_header(adev-
> >>> mode_info.atom_context, index, &size,
> >>> +				   &frev, &crev, &data_offset)) {
> >>> +		vram_info = (union vram_info *)
> >>> +			(adev->mode_info.atom_context->bios +
> >> data_offset);
> >>> +		switch (frev) {
> >>> +		case 1:
> >>> +			DRM_ERROR("old table version %d, %d\n", frev,
> >> crev);
> >>> +			return -EINVAL;
> >>> +		case 2:
> >>> +			switch (crev) {
> >>> +			case 1:
> >>> +				if (module_index < vram_info-
> >>> v2_1.ucNumOfVRAMModule) {
> >>> +					ATOM_INIT_REG_BLOCK *reg_block
> >> =
> >>> +						(ATOM_INIT_REG_BLOCK *)
> >>> +						((u8 *)vram_info +
> >> le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> >>> +
> >> 	ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> >>> +
> >> 	(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>> +						((u8 *)reg_block + (2 *
> >> sizeof(u16)) +
> >>> +						 le16_to_cpu(reg_block-
> >>> usRegIndexTblSize));
> >>> +					ATOM_INIT_REG_INDEX_FORMAT
> >> *format = &reg_block->asRegIndexBuf[0];
> >>> +					num_entries =
> >> (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> >>> +
> >> sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> >>> +					if (num_entries >
> >> VBIOS_MC_REGISTER_ARRAY_SIZE)
> >>> +						return -EINVAL;
> >>> +					while (i < num_entries) {
> >>> +						if (format-
> >>> ucPreRegDataLength & ACCESS_PLACEHOLDER)
> >>> +							break;
> >>> +						reg_table-
> >>> mc_reg_address[i].s1 =
> >>> +
> >> 	(u16)(le16_to_cpu(format->usRegIndex));
> >>> +						reg_table-
> >>> mc_reg_address[i].pre_reg_data =
> >>> +							(u8)(format-
> >>> ucPreRegDataLength);
> >>> +						i++;
> >>> +						format =
> >> (ATOM_INIT_REG_INDEX_FORMAT *)
> >>> +							((u8 *)format +
> >> sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> >>> +					}
> >>> +					reg_table->last = i;
> >>> +					while ((le32_to_cpu(*(u32
> >> *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> >>> +					       (num_ranges <
> >> VBIOS_MAX_AC_TIMING_ENTRIES)) {
> >>> +						t_mem_id =
> >> (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> >>> +								>>
> >> MEM_ID_SHIFT);
> >>> +						if (module_index ==
> >> t_mem_id) {
> >>> +							reg_table-
> >>> mc_reg_table_entry[num_ranges].mclk_max =
> >>> +
> >> 	(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> >>> +								      >>
> >> CLOCK_RANGE_SHIFT);
> >>> +							for (i = 0, j = 1; i <
> >> reg_table->last; i++) {
> >>> +								if ((reg_table-
> >>> mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> >> DATA_FROM_TABLE) {
> >>> +
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>> +
> >> 	(u32)le32_to_cpu(*((u32 *)reg_data + j));
> >>> +									j++;
> >>> +								} else if
> >> ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
> >> DATA_EQU_PREV) {
> >>> +
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>> +
> >> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> >>> +								}
> >>> +							}
> >>> +							num_ranges++;
> >>> +						}
> >>> +						reg_data =
> >> (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>> +							((u8 *)reg_data +
> >> le16_to_cpu(reg_block->usRegDataBlkSize));
> >>> +					}
> >>> +					if (le32_to_cpu(*(u32 *)reg_data) !=
> >> END_OF_REG_DATA_BLOCK)
> >>> +						return -EINVAL;
> >>> +					reg_table->num_entries =
> >> num_ranges;
> >>> +				} else
> >>> +					return -EINVAL;
> >>> +				break;
> >>> +			default:
> >>> +				DRM_ERROR("Unknown table
> >> version %d, %d\n", frev, crev);
> >>> +				return -EINVAL;
> >>> +			}
> >>> +			break;
> >>> +		default:
> >>> +			DRM_ERROR("Unknown table version %d, %d\n",
> >> frev, crev);
> >>> +			return -EINVAL;
> >>> +		}
> >>> +		return 0;
> >>> +	}
> >>> +	return -EINVAL;
> >>> +}
> >>> +
> >>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> >>> +{
> >>> +	const char *s;
> >>> +
> >>> +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> >>> +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> >>> +	default:
> >>> +		s = "none";
> >>> +		break;
> >>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> >>> +		s = "battery";
> >>> +		break;
> >>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> >>> +		s = "balanced";
> >>> +		break;
> >>> +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> >>> +		s = "performance";
> >>> +		break;
> >>> +	}
> >>> +	printk("\tui class: %s\n", s);
> >>> +	printk("\tinternal class:");
> >>> +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> >>> +	    (class2 == 0))
> >>> +		pr_cont(" none");
> >>> +	else {
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> >>> +			pr_cont(" boot");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>> +			pr_cont(" thermal");
> >>> +		if (class &
> >> ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> >>> +			pr_cont(" limited_pwr");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> >>> +			pr_cont(" rest");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> >>> +			pr_cont(" forced");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>> +			pr_cont(" 3d_perf");
> >>> +		if (class &
> >> ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> >>> +			pr_cont(" ovrdrv");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> >>> +			pr_cont(" uvd");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> >>> +			pr_cont(" 3d_low");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>> +			pr_cont(" acpi");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>> +			pr_cont(" uvd_hd2");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>> +			pr_cont(" uvd_hd");
> >>> +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>> +			pr_cont(" uvd_sd");
> >>> +		if (class2 &
> >> ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> >>> +			pr_cont(" limited_pwr2");
> >>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>> +			pr_cont(" ulv");
> >>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>> +			pr_cont(" uvd_mvc");
> >>> +	}
> >>> +	pr_cont("\n");
> >>> +}
> >>> +
> >>> +void amdgpu_dpm_print_cap_info(u32 caps)
> >>> +{
> >>> +	printk("\tcaps:");
> >>> +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> >>> +		pr_cont(" single_disp");
> >>> +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> >>> +		pr_cont(" video");
> >>> +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> >>> +		pr_cont(" no_dc");
> >>> +	pr_cont("\n");
> >>> +}
> >>> +
> >>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>> +				struct amdgpu_ps *rps)
> >>> +{
> >>> +	printk("\tstatus:");
> >>> +	if (rps == adev->pm.dpm.current_ps)
> >>> +		pr_cont(" c");
> >>> +	if (rps == adev->pm.dpm.requested_ps)
> >>> +		pr_cont(" r");
> >>> +	if (rps == adev->pm.dpm.boot_ps)
> >>> +		pr_cont(" b");
> >>> +	pr_cont("\n");
> >>> +}
> >>> +
> >>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> >>> +{
> >>> +	int i;
> >>> +
> >>> +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> >>> +		return;
> >>> +
> >>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> >>> +		amdgpu_dpm_print_power_state(adev, &adev-
> >>> pm.dpm.ps[i]);
> >>> +
> >>> +}
> >>> +
> >>> +union power_info {
> >>> +	struct _ATOM_POWERPLAY_INFO info;
> >>> +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> >>> +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> >>> +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> >>> +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> >>> +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> >>> +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> >>> +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> >>> +};
> >>> +
> >>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> >>> +{
> >>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> +	union power_info *power_info;
> >>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> +	u16 data_offset;
> >>> +	u8 frev, crev;
> >>> +
> >>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
> >> index, NULL,
> >>> +				   &frev, &crev, &data_offset))
> >>> +		return -EINVAL;
> >>> +	power_info = (union power_info *)(mode_info->atom_context-
> >>> bios + data_offset);
> >>> +
> >>> +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info-
> >>> pplib.ulPlatformCaps);
> >>> +	adev->pm.dpm.backbias_response_time =
> >> le16_to_cpu(power_info->pplib.usBackbiasTime);
> >>> +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info-
> >>> pplib.usVoltageTime);
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +union fan_info {
> >>> +	struct _ATOM_PPLIB_FANTABLE fan;
> >>> +	struct _ATOM_PPLIB_FANTABLE2 fan2;
> >>> +	struct _ATOM_PPLIB_FANTABLE3 fan3;
> >>> +};
> >>> +
> >>> +static int amdgpu_parse_clk_voltage_dep_table(struct
> >> amdgpu_clock_voltage_dependency_table *amdgpu_table,
> >>> +
> >> ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> >>> +{
> >>> +	u32 size = atom_table->ucNumEntries *
> >>> +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>> +	int i;
> >>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> >>> +
> >>> +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> >>> +	if (!amdgpu_table->entries)
> >>> +		return -ENOMEM;
> >>> +
> >>> +	entry = &atom_table->entries[0];
> >>> +	for (i = 0; i < atom_table->ucNumEntries; i++) {
> >>> +		amdgpu_table->entries[i].clk = le16_to_cpu(entry-
> >>> usClockLow) |
> >>> +			(entry->ucClockHigh << 16);
> >>> +		amdgpu_table->entries[i].v = le16_to_cpu(entry-
> >>> usVoltage);
> >>> +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record
> >> *)
> >>> +			((u8 *)entry +
> >> sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> >>> +	}
> >>> +	amdgpu_table->count = atom_table->ucNumEntries;
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> >>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> >>> +
> >>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> >>> +{
> >>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> +	union power_info *power_info;
> >>> +	union fan_info *fan_info;
> >>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
> >>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> +	u16 data_offset;
> >>> +	u8 frev, crev;
> >>> +	int ret, i;
> >>> +
> >>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>> +				   &frev, &crev, &data_offset))
> >>> +		return -EINVAL;
> >>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> >>> +
> >>> +	/* fan table */
> >>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>> +		if (power_info->pplib3.usFanTableOffset) {
> >>> +			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
> >>> +						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
> >>> +			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
> >>> +			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
> >>> +			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
> >>> +			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
> >>> +			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
> >>> +			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
> >>> +			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
> >>> +			if (fan_info->fan.ucFanTableFormat >= 2)
> >>> +				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
> >>> +			else
> >>> +				adev->pm.dpm.fan.t_max = 10900;
> >>> +			adev->pm.dpm.fan.cycle_delay = 100000;
> >>> +			if (fan_info->fan.ucFanTableFormat >= 3) {
> >>> +				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
> >>> +				adev->pm.dpm.fan.default_max_fan_pwm =
> >>> +					le16_to_cpu(fan_info->fan3.usFanPWMMax);
> >>> +				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
> >>> +				adev->pm.dpm.fan.fan_output_sensitivity =
> >>> +					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
> >>> +			}
> >>> +			adev->pm.dpm.fan.ucode_fan_control = true;
> >>> +		}
> >>> +	}
> >>> +
> >>> +	/* clock dependency tables, shedding tables */
> >>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> >>> +		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
> >>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
> >>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
> >>> +								 dep_table);
> >>> +			if (ret) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return ret;
> >>> +			}
> >>> +		}
> >>> +		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
> >>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> >>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> >>> +								 dep_table);
> >>> +			if (ret) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return ret;
> >>> +			}
> >>> +		}
> >>> +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> >>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> >>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> >>> +								 dep_table);
> >>> +			if (ret) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return ret;
> >>> +			}
> >>> +		}
> >>> +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> >>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> >>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> >>> +								 dep_table);
> >>> +			if (ret) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return ret;
> >>> +			}
> >>> +		}
> >>> +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> >>> +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> >>> +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> >>> +			if (clk_v->ucNumEntries) {
> >>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> >>> +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> >>> +					(clk_v->entries[0].ucSclkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> >>> +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> >>> +					(clk_v->entries[0].ucMclkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> >>> +					le16_to_cpu(clk_v->entries[0].usVddc);
> >>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> >>> +					le16_to_cpu(clk_v->entries[0].usVddci);
> >>> +			}
> >>> +		}
> >>> +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> >>> +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> >>> +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> >>> +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> >>> +
> >>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> >>> +				kcalloc(psl->ucNumEntries,
> >>> +					sizeof(struct amdgpu_phase_shedding_limits_entry),
> >>> +					GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +
> >>> +			entry = &psl->entries[0];
> >>> +			for (i = 0; i < psl->ucNumEntries; i++) {
> >>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> >>> +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> >>> +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> >>> +					le16_to_cpu(entry->usVoltage);
> >>> +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> >>> +				psl->ucNumEntries;
> >>> +		}
> >>> +	}
> >>> +
> >>> +	/* cac data */
> >>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> >>> +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> >>> +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> >>> +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> >>> +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> >>> +		if (adev->pm.dpm.tdp_od_limit)
> >>> +			adev->pm.dpm.power_control = true;
> >>> +		else
> >>> +			adev->pm.dpm.power_control = false;
> >>> +		adev->pm.dpm.tdp_adjustment = 0;
> >>> +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> >>> +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> >>> +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> >>> +		if (power_info->pplib5.usCACLeakageTableOffset) {
> >>> +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> >>> +				(ATOM_PPLIB_CAC_Leakage_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> >>> +			ATOM_PPLIB_CAC_Leakage_Record *entry;
> >>> +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> >>> +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			entry = &cac_table->entries[0];
> >>> +			for (i = 0; i < cac_table->ucNumEntries; i++) {
> >>> +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> >>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> >>> +						le16_to_cpu(entry->usVddc1);
> >>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> >>> +						le16_to_cpu(entry->usVddc2);
> >>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> >>> +						le16_to_cpu(entry->usVddc3);
> >>> +				} else {
> >>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> >>> +						le16_to_cpu(entry->usVddc);
> >>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> >>> +						le32_to_cpu(entry->ulLeakageValue);
> >>> +				}
> >>> +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> >>> +		}
> >>> +	}
> >>> +
> >>> +	/* ext tables */
> >>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>> +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> >>> +			(mode_info->atom_context->bios + data_offset +
> >>> +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> >>> +			ext_hdr->usVCETableOffset) {
> >>> +			VCEClockInfoArray *array = (VCEClockInfoArray *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> >>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> >>> +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> >>> +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> >>> +			ATOM_PPLIB_VCE_State_Table *states =
> >>> +				(ATOM_PPLIB_VCE_State_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> >>> +				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
> >>> +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> >>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> >>> +			ATOM_PPLIB_VCE_State_Record *state_entry;
> >>> +			VCEClockInfo *vce_clk;
> >>> +			u32 size = limits->numEntries *
> >>> +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> >>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> >>> +				kzalloc(size, GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> >>> +				limits->numEntries;
> >>> +			entry = &limits->entries[0];
> >>> +			state_entry = &states->entries[0];
> >>> +			for (i = 0; i < limits->numEntries; i++) {
> >>> +				vce_clk = (VCEClockInfo *)
> >>> +					((u8 *)&array->entries[0] +
> >>> +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> >>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> >>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> >>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> >>> +					le16_to_cpu(entry->usVoltage);
> >>> +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> >>> +			}
> >>> +			adev->pm.dpm.num_of_vce_states =
> >>> +					states->numEntries > AMD_MAX_VCE_LEVELS ?
> >>> +					AMD_MAX_VCE_LEVELS : states->numEntries;
> >>> +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> >>> +				vce_clk = (VCEClockInfo *)
> >>> +					((u8 *)&array->entries[0] +
> >>> +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> >>> +				adev->pm.dpm.vce_states[i].evclk =
> >>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> >>> +				adev->pm.dpm.vce_states[i].ecclk =
> >>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> >>> +				adev->pm.dpm.vce_states[i].clk_idx =
> >>> +					state_entry->ucClockInfoIndex & 0x3f;
> >>> +				adev->pm.dpm.vce_states[i].pstate =
> >>> +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> >>> +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> >>> +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> >>> +			}
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> >>> +			ext_hdr->usUVDTableOffset) {
> >>> +			UVDClockInfoArray *array = (UVDClockInfoArray *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> >>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> >>> +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> >>> +				 1 + (array->ucNumEntries * sizeof(UVDClockInfo)));
> >>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> >>> +			u32 size = limits->numEntries *
> >>> +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> >>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> >>> +				kzalloc(size, GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> >>> +				limits->numEntries;
> >>> +			entry = &limits->entries[0];
> >>> +			for (i = 0; i < limits->numEntries; i++) {
> >>> +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> >>> +					((u8 *)&array->entries[0] +
> >>> +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> >>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> >>> +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> >>> +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> >>> +					le16_to_cpu(entry->usVoltage);
> >>> +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> >>> +			}
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> >>> +			ext_hdr->usSAMUTableOffset) {
> >>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> >>> +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> >>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> >>> +			u32 size = limits->numEntries *
> >>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> >>> +				kzalloc(size, GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> >>> +				limits->numEntries;
> >>> +			entry = &limits->entries[0];
> >>> +			for (i = 0; i < limits->numEntries; i++) {
> >>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> >>> +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> >>> +					le16_to_cpu(entry->usVoltage);
> >>> +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> >>> +			}
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> >>> +		    ext_hdr->usPPMTableOffset) {
> >>> +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> >>> +			adev->pm.dpm.dyn_state.ppm_table =
> >>> +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.ppm_table) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
> >>> +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
> >>> +				le16_to_cpu(ppm->usCpuCoreNumber);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
> >>> +				le32_to_cpu(ppm->ulPlatformTDP);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
> >>> +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
> >>> +				le32_to_cpu(ppm->ulPlatformTDC);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
> >>> +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> >>> +				le32_to_cpu(ppm->ulApuTDP);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> >>> +				le32_to_cpu(ppm->ulDGpuTDP);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
> >>> +				le32_to_cpu(ppm->ulDGpuUlvPower);
> >>> +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> >>> +				le32_to_cpu(ppm->ulTjmax);
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> >>> +			ext_hdr->usACPTableOffset) {
> >>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> >>> +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
> >>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> >>> +			u32 size = limits->numEntries *
> >>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> >>> +				kzalloc(size, GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> >>> +				limits->numEntries;
> >>> +			entry = &limits->entries[0];
> >>> +			for (i = 0; i < limits->numEntries; i++) {
> >>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> >>> +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
> >>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> >>> +					le16_to_cpu(entry->usVoltage);
> >>> +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> >>> +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> >>> +			}
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> >>> +			ext_hdr->usPowerTuneTableOffset) {
> >>> +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
> >>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>> +			ATOM_PowerTune_Table *pt;
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table =
> >>> +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
> >>> +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> >>> +				amdgpu_free_extended_power_table(adev);
> >>> +				return -ENOMEM;
> >>> +			}
> >>> +			if (rev > 0) {
> >>> +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> >>> +					(mode_info->atom_context->bios + data_offset +
> >>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
> >>> +					ppt->usMaximumPowerDeliveryLimit;
> >>> +				pt = &ppt->power_tune_table;
> >>> +			} else {
> >>> +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
> >>> +					(mode_info->atom_context->bios + data_offset +
> >>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
> >>> +				pt = &ppt->power_tune_table;
> >>> +			}
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
> >>> +				le16_to_cpu(pt->usConfigurableTDP);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
> >>> +				le16_to_cpu(pt->usBatteryPowerLimit);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
> >>> +				le16_to_cpu(pt->usSmallPowerLimit);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
> >>> +				le16_to_cpu(pt->usLowCACLeakage);
> >>> +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
> >>> +				le16_to_cpu(pt->usHighCACLeakage);
> >>> +		}
> >>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> >>> +				ext_hdr->usSclkVddgfxTableOffset) {
> >>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>> +				(mode_info->atom_context->bios + data_offset +
> >>> +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
> >>> +			ret = amdgpu_parse_clk_voltage_dep_table(
> >>> +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> >>> +					dep_table);
> >>> +			if (ret) {
> >>> +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> >>> +				return ret;
> >>> +			}
> >>> +		}
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
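[Worth noting how the parser above guards every optional sub-table twice: usTableSize must be large enough for the offset field to exist at all, and the offset itself must be non-zero before it is dereferenced. A compact standalone sketch of that double gate, with hypothetical struct and field names:

    #include <stdint.h>
    #include <stdio.h>

    struct table_v3 { uint16_t size; uint16_t fan_table_offset; };
    struct table_v4 { struct table_v3 v3; uint16_t vddc_on_sclk_offset; };

    static void parse(uint16_t base, const struct table_v3 *t)
    {
            /* (a) revision gate: is the v4 field physically present? */
            if (t->size >= sizeof(struct table_v4)) {
                    const struct table_v4 *t4 = (const struct table_v4 *)t;
                    /* (b) presence gate: offset 0 means "no such sub-table" */
                    if (t4->vddc_on_sclk_offset)
                            printf("sub-table at 0x%x\n",
                                   base + t4->vddc_on_sclk_offset);
            }
    }

    int main(void)
    {
            struct table_v4 t = { { sizeof(struct table_v4), 0 }, 0x40 };
            parse(0x100, &t.v3);
            return 0;
    }
]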
> >>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> >>> +{
> >>> +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> >>> +
> >>> +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> >>> +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> >>> +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> >>> +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> >>> +	kfree(dyn_state->cac_leakage_table.entries);
> >>> +	kfree(dyn_state->phase_shedding_limits_table.entries);
> >>> +	kfree(dyn_state->ppm_table);
> >>> +	kfree(dyn_state->cac_tdp_table);
> >>> +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> >>> +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> >>> +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> >>> +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> >>> +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> >>> +}
> >>> +
> >>> +static const char *pp_lib_thermal_controller_names[] = {
> >>> +	"NONE",
> >>> +	"lm63",
> >>> +	"adm1032",
> >>> +	"adm1030",
> >>> +	"max6649",
> >>> +	"lm64",
> >>> +	"f75375",
> >>> +	"RV6xx",
> >>> +	"RV770",
> >>> +	"adt7473",
> >>> +	"NONE",
> >>> +	"External GPIO",
> >>> +	"Evergreen",
> >>> +	"emc2103",
> >>> +	"Sumo",
> >>> +	"Northern Islands",
> >>> +	"Southern Islands",
> >>> +	"lm96163",
> >>> +	"Sea Islands",
> >>> +	"Kaveri/Kabini",
> >>> +};
> >>> +
> >>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> >>> +{
> >>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>> +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> >>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>> +	ATOM_PPLIB_THERMALCONTROLLER *controller;
> >>> +	struct amdgpu_i2c_bus_rec i2c_bus;
> >>> +	u16 data_offset;
> >>> +	u8 frev, crev;
> >>> +
> >>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>> +				   &frev, &crev, &data_offset))
> >>> +		return;
> >>> +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> >>> +		(mode_info->atom_context->bios + data_offset);
> >>> +	controller = &power_table->sThermalController;
> >>> +
> >>> +	/* add the i2c bus for thermal/fan chip */
> >>> +	if (controller->ucType > 0) {
> >>> +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
> >>> +			adev->pm.no_fan = true;
> >>> +		adev->pm.fan_pulses_per_revolution =
> >>> +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
> >>> +		if (adev->pm.fan_pulses_per_revolution) {
> >>> +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> >>> +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
> >>> +		}
> >>> +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
> >>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> >>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> >>> +			DRM_INFO("External GPIO thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
> >>> +		} else if (controller->ucType ==
> >>> +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> >>> +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> >>> +		} else if (controller->ucType ==
> >>> +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> >>> +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> >>> +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> >>> +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
> >>> +				 pp_lib_thermal_controller_names[controller->ucType],
> >>> +				 controller->ucI2cAddress >> 1,
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
> >>> +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
> >>> +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
> >>> +			if (adev->pm.i2c_bus) {
> >>> +				struct i2c_board_info info = { };
> >>> +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
> >>> +				info.addr = controller->ucI2cAddress >> 1;
> >>> +				strlcpy(info.type, name, sizeof(info.type));
> >>> +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
> >>> +			}
> >>> +		} else {
> >>> +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
> >>> +				 controller->ucType,
> >>> +				 controller->ucI2cAddress >> 1,
> >>> +				 (controller->ucFanParameters &
> >>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>> +		}
> >>> +	}
> >>> +}
> >>> +
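[For external controllers, the branch above publishes the fan/thermal chip to the i2c core so its hwmon driver can bind. A reduced kernel-context sketch of that hookup; struct i2c_board_info and i2c_new_client_device() are the kernel interfaces used above, while the helper name, adapter pointer, and chip name are assumptions for illustration:

    #include <linux/i2c.h>
    #include <linux/string.h>

    static struct i2c_client *register_fan_chip(struct i2c_adapter *adap,
                                                const char *name, u8 addr7)
    {
            struct i2c_board_info info = { };

            info.addr = addr7;                          /* 7-bit i2c address */
            strscpy(info.type, name, sizeof(info.type)); /* device-id string */
            /* returns the client, or an ERR_PTR on failure */
            return i2c_new_client_device(adap, &info);
    }
]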
> >>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx)
> >>> +{
> >>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> +
> >>> +	if (idx < adev->pm.dpm.num_of_vce_states)
> >>> +		return &adev->pm.dpm.vce_states[idx];
> >>> +
> >>> +	return NULL;
> >>> +}
> >>> +
> >>> +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> >>> +						     enum amd_pm_state_type dpm_state)
> >>> +{
> >>> +	int i;
> >>> +	struct amdgpu_ps *ps;
> >>> +	u32 ui_class;
> >>> +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> >>> +		true : false;
> >>> +
> >>> +	/* check if the vblank period is too short to adjust the mclk */
> >>> +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> >>> +		if (amdgpu_dpm_vblank_too_short(adev))
> >>> +			single_display = false;
> >>> +	}
> >>> +
> >>> +	/* certain older asics have a separate 3D performance state,
> >>> +	 * so try that first if the user selected performance
> >>> +	 */
> >>> +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> >>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> >>> +	/* balanced states don't exist at the moment */
> >>> +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> >>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> +
> >>> +restart_search:
> >>> +	/* Pick the best power state based on current conditions */
> >>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> >>> +		ps = &adev->pm.dpm.ps[i];
> >>> +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> >>> +		switch (dpm_state) {
> >>> +		/* user states */
> >>> +		case POWER_STATE_TYPE_BATTERY:
> >>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> >>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> +					if (single_display)
> >>> +						return ps;
> >>> +				} else
> >>> +					return ps;
> >>> +			}
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_BALANCED:
> >>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> >>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> +					if (single_display)
> >>> +						return ps;
> >>> +				} else
> >>> +					return ps;
> >>> +			}
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_PERFORMANCE:
> >>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> >>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>> +					if (single_display)
> >>> +						return ps;
> >>> +				} else
> >>> +					return ps;
> >>> +			}
> >>> +			break;
> >>> +		/* internal states */
> >>> +		case POWER_STATE_TYPE_INTERNAL_UVD:
> >>> +			if (adev->pm.dpm.uvd_ps)
> >>> +				return adev->pm.dpm.uvd_ps;
> >>> +			else
> >>> +				break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_BOOT:
> >>> +			return adev->pm.dpm.boot_ps;
> >>> +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_ULV:
> >>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>> +				return ps;
> >>> +			break;
> >>> +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>> +				return ps;
> >>> +			break;
> >>> +		default:
> >>> +			break;
> >>> +		}
> >>> +	}
> >>> +	/* use a fallback state if we didn't match */
> >>> +	switch (dpm_state) {
> >>> +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> >>> +		goto restart_search;
> >>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>> +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>> +		if (adev->pm.dpm.uvd_ps) {
> >>> +			return adev->pm.dpm.uvd_ps;
> >>> +		} else {
> >>> +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> +			goto restart_search;
> >>> +		}
> >>> +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> >>> +		goto restart_search;
> >>> +	case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>> +		dpm_state = POWER_STATE_TYPE_BATTERY;
> >>> +		goto restart_search;
> >>> +	case POWER_STATE_TYPE_BATTERY:
> >>> +	case POWER_STATE_TYPE_BALANCED:
> >>> +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>> +		goto restart_search;
> >>> +	default:
> >>> +		break;
> >>> +	}
> >>> +
> >>> +	return NULL;
> >>> +}
> >>> +
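[The picker above is a two-phase match: a linear scan per requested state type, then a fixed fallback chain retried via the restart_search label. The chain itself reads naturally as data; a small standalone sketch with abbreviated state names, not the driver's enums:

    #include <stdio.h>

    enum state { BATTERY, BALANCED, PERF, UVD_SD, UVD_HD, ACPI, THERMAL, NONE };

    static enum state fallback(enum state s)
    {
            switch (s) {
            case UVD_SD:   return UVD_HD;  /* SD clip: try the HD state */
            case THERMAL:  return ACPI;    /* thermal: fall to quieter ACPI state */
            case ACPI:     return BATTERY;
            case BATTERY:
            case BALANCED: return PERF;    /* everything degrades to performance */
            default:       return NONE;    /* PERF has no further fallback */
            }
    }

    int main(void)
    {
            enum state s;

            /* walk the chain exactly as repeated restart_search passes would */
            for (s = THERMAL; s != NONE; s = fallback(s))
                    printf("try state %d\n", s);
            return 0;
    }
]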
> >>> +int amdgpu_dpm_change_power_state_locked(void *handle)
> >>> +{
> >>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> +	struct amdgpu_ps *ps;
> >>> +	enum amd_pm_state_type dpm_state;
> >>> +	int ret;
> >>> +	bool equal = false;
> >>> +
> >>> +	/* if dpm init failed */
> >>> +	if (!adev->pm.dpm_enabled)
> >>> +		return 0;
> >>> +
> >>> +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> >>> +		/* add other state override checks here */
> >>> +		if ((!adev->pm.dpm.thermal_active) &&
> >>> +		    (!adev->pm.dpm.uvd_active))
> >>> +			adev->pm.dpm.state = adev->pm.dpm.user_state;
> >>> +	}
> >>> +	dpm_state = adev->pm.dpm.state;
> >>> +
> >>> +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> >>> +	if (ps)
> >>> +		adev->pm.dpm.requested_ps = ps;
> >>> +	else
> >>> +		return -EINVAL;
> >>> +
> >>> +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> >>> +		printk("switching from power state:\n");
> >>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> >>> +		printk("switching to power state:\n");
> >>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> >>> +	}
> >>> +
> >>> +	/* update whether vce is active */
> >>> +	ps->vce_active = adev->pm.dpm.vce_active;
> >>> +	if (adev->powerplay.pp_funcs->display_configuration_changed)
> >>> +		amdgpu_dpm_display_configuration_changed(adev);
> >>> +
> >>> +	ret = amdgpu_dpm_pre_set_power_state(adev);
> >>> +	if (ret)
> >>> +		return ret;
> >>> +
> >>> +	if (adev->powerplay.pp_funcs->check_state_equal) {
> >>> +		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> >>> +			equal = false;
> >>> +	}
> >>> +
> >>> +	if (equal)
> >>> +		return 0;
> >>> +
> >>> +	if (adev->powerplay.pp_funcs->set_power_state)
> >>> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> >>> +
> >>> +	amdgpu_dpm_post_set_power_state(adev);
> >>> +
> >>> +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> >>> +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> >>> +
> >>> +	if (adev->powerplay.pp_funcs->force_performance_level) {
> >>> +		if (adev->pm.dpm.thermal_active) {
> >>> +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> >>> +			/* force low perf level for thermal */
> >>> +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> >>> +			/* save the user's level */
> >>> +			adev->pm.dpm.forced_level = level;
> >>> +		} else {
> >>> +			/* otherwise, user selected level */
> >>> +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> >>> +		}
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +}
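[The sequencing in the function above is the contract the per-ASIC backends rely on: pick, optional display notification, pre_set, equality early-out, set, post_set, then re-apply the forced performance level. A stubbed standalone sketch of just that ordering; every type and helper here is a stand-in, not driver code:

    #include <stdio.h>

    struct ps { int id; };
    struct dev {
            struct ps *current_ps, *requested_ps;
            void (*set_power_state)(struct dev *);  /* optional backend hook */
    };

    static struct ps *pick(struct dev *d) { static struct ps p = { 2 }; (void)d; return &p; }
    static int pre_set(struct dev *d) { (void)d; return 0; }
    static void post_set(struct dev *d) { (void)d; }

    static int change_power_state(struct dev *d)
    {
            struct ps *next = pick(d);

            if (!next)
                    return -1;
            d->requested_ps = next;
            if (pre_set(d))                  /* backend prep, may veto */
                    return -1;
            if (d->current_ps && d->current_ps->id == next->id)
                    return 0;                /* equal states: skip the hw write */
            if (d->set_power_state)
                    d->set_power_state(d);   /* the actual hardware switch */
            post_set(d);
            d->current_ps = next;
            return 0;
    }

    int main(void)
    {
            struct dev d = { 0 };
            printf("ret=%d\n", change_power_state(&d));
            return 0;
    }
]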
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>> new file mode 100644
> >>> index 000000000000..4adc765c8824
> >>> --- /dev/null
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>> @@ -0,0 +1,70 @@
> >>> +/*
> >>> + * Copyright 2021 Advanced Micro Devices, Inc.
> >>> + *
> >>> + * Permission is hereby granted, free of charge, to any person obtaining a
> >>> + * copy of this software and associated documentation files (the "Software"),
> >>> + * to deal in the Software without restriction, including without limitation
> >>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> >>> + * and/or sell copies of the Software, and to permit persons to whom the
> >>> + * Software is furnished to do so, subject to the following conditions:
> >>> + *
> >>> + * The above copyright notice and this permission notice shall be included in
> >>> + * all copies or substantial portions of the Software.
> >>> + *
> >>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> >>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> >>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> >>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> >>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> >>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> >>> + * OTHER DEALINGS IN THE SOFTWARE.
> >>> + *
> >>> + */
> >>> +#ifndef __LEGACY_DPM_H__
> >>> +#define __LEGACY_DPM_H__
> >>> +
> >>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>> +					    u32 clock,
> >>> +					    bool strobe_mode,
> >>> +					    struct atom_mpll_param *mpll_param);
> >>> +
> >>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> >>> +					     u32 eng_clock, u32 mem_clock);
> >>> +
> >>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> >>> +					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >>> +
> >>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> >>> +			     u16 voltage_id, u16 *voltage);
> >>> +
> >>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> >>> +						      u16 *voltage,
> >>> +						      u16 leakage_idx);
> >>> +
> >>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>> +			      u8 voltage_type,
> >>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> >>> +
> >>> +bool
> >>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>> +				u8 voltage_type, u8 voltage_mode);
> >>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>> +				      u8 voltage_type, u8 voltage_mode,
> >>> +				      struct atom_voltage_table *voltage_table);
> >>> +
> >>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>> +				      u8 module_index,
> >>> +				      struct atom_mc_reg_table *reg_table);
> >>> +
> >>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> >>> +void amdgpu_dpm_print_cap_info(u32 caps);
> >>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>> +				struct amdgpu_ps *rps);
> >>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> >>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> >>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> >>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> >>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
> >>> +int amdgpu_dpm_change_power_state_locked(void *handle);
> >>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >>> +#endif
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>> index 4f84d8b893f1..a2881c90d187 100644
> >>> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>> @@ -37,6 +37,7 @@
> >>>    #include <linux/math64.h>
> >>>    #include <linux/seq_file.h>
> >>>    #include <linux/firmware.h>
> >>> +#include <legacy_dpm.h>
> >>>
> >>>    #define MC_CG_ARB_FREQ_F0           0x0a
> >>>    #define MC_CG_ARB_FREQ_F1           0x0b
> >>> @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
> >>>    	.check_state_equal = &si_check_state_equal,
> >>>    	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >>>    	.read_sensor = &si_dpm_read_sensor,
> >>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >>>    };
> >>>
> >>>    static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
> >>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power
  2021-12-01  6:38       ` Lazar, Lijo
@ 2021-12-01  7:24         ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-01  7:24 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Wednesday, December 1, 2021 2:39 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 13/17] drm/amd/pm: do not expose the
> smu_context structure used internally in power
> 
> 
> 
> On 12/1/2021 11:09 AM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -----Original Message-----
> >> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> >> Sent: Tuesday, November 30, 2021 9:58 PM
> >> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig,
> Christian
> >> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> >> Subject: Re: [PATCH V2 13/17] drm/amd/pm: do not expose the
> >> smu_context structure used internally in power
> >>
> >>
> >>
> >> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>> This can cover the power implementation details. And as was done for the
> >>> powerplay framework, we hook the smu_context to adev->powerplay.pp_handle.
> >>>
> >>> Signed-off-by: Evan Quan <evan.quan@amd.com>
> >>> Change-Id: I3969c9f62a8b63dc6e4321a488d8f15022ffeb3d
> >>> ---
> >>>    drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  6 --
> >>>    .../gpu/drm/amd/include/kgd_pp_interface.h    |  9 +++
> >>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 51 ++++++++++------
> >>>    drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h       | 11 +---
> >>>    drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 60 +++++++++++++------
> >>>    .../gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c |  9 +--
> >>>    .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   |  9 +--
> >>>    .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   |  9 +--
> >>>    .../gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c    |  4 +-
> >>>    .../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c    |  9 +--
> >>>    .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c    |  8 +--
> >>>    11 files changed, 111 insertions(+), 74 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >>> index c987813a4996..fefabd568483 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >>> @@ -99,7 +99,6 @@
> >>>    #include "amdgpu_gem.h"
> >>>    #include "amdgpu_doorbell.h"
> >>>    #include "amdgpu_amdkfd.h"
> >>> -#include "amdgpu_smu.h"
> >>>    #include "amdgpu_discovery.h"
> >>>    #include "amdgpu_mes.h"
> >>>    #include "amdgpu_umc.h"
> >>> @@ -950,11 +949,6 @@ struct amdgpu_device {
> >>>
> >>>    	/* powerplay */
> >>>    	struct amd_powerplay		powerplay;
> >>> -
> >>> -	/* smu */
> >>> -	struct smu_context		smu;
> >>> -
> >>> -	/* dpm */
> >>>    	struct amdgpu_pm		pm;
> >>>    	u32				cg_flags;
> >>>    	u32				pg_flags;
> >>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> index 7919e96e772b..da6a82430048 100644
> >>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>> @@ -25,6 +25,9 @@
> >>>    #define __KGD_PP_INTERFACE_H__
> >>>
> >>>    extern const struct amdgpu_ip_block_version pp_smu_ip_block;
> >>> +extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> >>> +extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> >>> +extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
> >>>
> >>>    enum smu_event_type {
> >>>    	SMU_EVENT_RESET_COMPLETE = 0,
> >>> @@ -244,6 +247,12 @@ enum pp_power_type
> >>>    	PP_PWR_TYPE_FAST,
> >>>    };
> >>>
> >>> +enum smu_ppt_limit_type
> >>> +{
> >>> +	SMU_DEFAULT_PPT_LIMIT = 0,
> >>> +	SMU_FAST_PPT_LIMIT,
> >>> +};
> >>> +
> >>
> >> This is a contradiction. If the entry point is dpm, this shouldn't be
> >> here and the external interface doesn't need to know about internal datatypes.
> > [Quan, Evan] This is needed by amdgpu_hwmon_show_power_label() from amdgpu_pm.c.
> > So, it has to be put in some place which can be accessed from outside of power.
> > Then kgd_pp_interface.h is the right place.
> 
> The public data types are enum pp_power_type and enum pp_power_limit_level.
> 
> The first one tells about the type of power limits (fast/slow/sustained) and the
> second one is about the min/max/default values for different limits.
> 
> To show the label, use the pp_power_type type.
[Quan, Evan] Thanks, I missed the pp_power_type. It seems we defined two data structures for the same purpose.
Let me check and fix this.
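
[For reference, the split being suggested here is small; a sketch of a label helper built only on the public enum, where the enum values mirror kgd_pp_interface.h and the helper name itself is hypothetical:

    /* mirrors enum pp_power_type from kgd_pp_interface.h for this sketch */
    enum pp_power_type { PP_PWR_TYPE_SUSTAINED = 0, PP_PWR_TYPE_FAST };

    static const char *power_type_to_label(enum pp_power_type type)
    {
            /* hwmon labels as exposed by amdgpu_pm.c today */
            return type == PP_PWR_TYPE_FAST ? "fastPPT" : "slowPPT";
    }

No swsmu-internal smu_ppt_limit_type is needed to produce the label.]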
> 
> >
> >>
> >>>    #define PP_GROUP_MASK        0xF0000000
> >>>    #define PP_GROUP_SHIFT       28
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> index 8f0ae58f4292..a5cbbf9367fe 100644
> >>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> @@ -31,6 +31,7 @@
> >>>    #include "amdgpu_display.h"
> >>>    #include "hwmgr.h"
> >>>    #include <linux/power_supply.h>
> >>> +#include "amdgpu_smu.h"
> >>>
> >>>    #define amdgpu_dpm_enable_bapm(adev, e) \
> >>>    		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> >>> @@ -213,7 +214,7 @@ int amdgpu_dpm_baco_reset(struct amdgpu_device *adev)
> >>>
> >>>    bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (is_support_sw_smu(adev))
> >>>    		return smu_mode1_reset_is_support(smu);
> >>> @@ -223,7 +224,7 @@ bool amdgpu_dpm_is_mode1_reset_supported(struct amdgpu_device *adev)
> >>>
> >>>    int amdgpu_dpm_mode1_reset(struct amdgpu_device *adev)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (is_support_sw_smu(adev))
> >>>    		return smu_mode1_reset(smu);
> >>> @@ -276,7 +277,7 @@ int amdgpu_dpm_set_df_cstate(struct amdgpu_device *adev,
> >>>
> >>>    int amdgpu_dpm_allow_xgmi_power_down(struct amdgpu_device *adev, bool en)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (is_support_sw_smu(adev))
> >>>    		return smu_allow_xgmi_power_down(smu, en);
> >>> @@ -341,7 +342,7 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
> >>>    		mutex_unlock(&adev->pm.mutex);
> >>>
> >>>    		if (is_support_sw_smu(adev))
> >>> -			smu_set_ac_dc(&adev->smu);
> >>> +			smu_set_ac_dc(adev->powerplay.pp_handle);
> >>>    	}
> >>>    }
> >>>
> >>> @@ -423,15 +424,16 @@ int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_versio
> >>>
> >>>    int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable)
> >>>    {
> >>> -	return smu_set_light_sbr(&adev->smu, enable);
> >>> +	return smu_set_light_sbr(adev->powerplay.pp_handle, enable);
> >>>    }
> >>>
> >>>    int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	int ret = 0;
> >>>
> >>> -	if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->send_hbm_bad_pages_num)
> >>> -		ret = adev->smu.ppt_funcs->send_hbm_bad_pages_num(&adev->smu, size);
> >>> +	if (is_support_sw_smu(adev))
> >>> +		ret = smu_send_hbm_bad_pages_num(smu, size);
> >>>
> >>>    	return ret;
> >>>    }
> >>> @@ -446,7 +448,7 @@ int amdgpu_dpm_get_dpm_freq_range(struct amdgpu_device *adev,
> >>>
> >>>    	switch (type) {
> >>>    	case PP_SCLK:
> >>> -		return smu_get_dpm_freq_range(&adev->smu, SMU_SCLK, min, max);
> >>> +		return smu_get_dpm_freq_range(adev->powerplay.pp_handle, SMU_SCLK, min, max);
> >>>    	default:
> >>>    		return -EINVAL;
> >>>    	}
> >>> @@ -457,12 +459,14 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >>>    				   uint32_t min,
> >>>    				   uint32_t max)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>>    	if (!is_support_sw_smu(adev))
> >>>    		return -EOPNOTSUPP;
> >>>
> >>>    	switch (type) {
> >>>    	case PP_SCLK:
> >>> -		return smu_set_soft_freq_range(&adev->smu, SMU_SCLK, min, max);
> >>> +		return smu_set_soft_freq_range(smu, SMU_SCLK, min, max);
> >>>    	default:
> >>>    		return -EINVAL;
> >>>    	}
> >>> @@ -470,33 +474,41 @@ int amdgpu_dpm_set_soft_freq_range(struct amdgpu_device *adev,
> >>>
> >>>    int amdgpu_dpm_write_watermarks_table(struct amdgpu_device *adev)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>>    	if (!is_support_sw_smu(adev))
> >>>    		return 0;
> >>>
> >>> -	return smu_write_watermarks_table(&adev->smu);
> >>> +	return smu_write_watermarks_table(smu);
> >>>    }
> >>>
> >>>    int amdgpu_dpm_wait_for_event(struct amdgpu_device *adev,
> >>>    			      enum smu_event_type event,
> >>>    			      uint64_t event_arg)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>>    	if (!is_support_sw_smu(adev))
> >>>    		return -EOPNOTSUPP;
> >>>
> >>> -	return smu_wait_for_event(&adev->smu, event, event_arg);
> >>> +	return smu_wait_for_event(smu, event, event_arg);
> >>>    }
> >>>
> >>>    int amdgpu_dpm_get_status_gfxoff(struct amdgpu_device *adev, uint32_t *value)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>>    	if (!is_support_sw_smu(adev))
> >>>    		return -EOPNOTSUPP;
> >>>
> >>> -	return smu_get_status_gfxoff(&adev->smu, value);
> >>> +	return smu_get_status_gfxoff(smu, value);
> >>>    }
> >>>
> >>>    uint64_t amdgpu_dpm_get_thermal_throttling_counter(struct amdgpu_device *adev)
> >>>    {
> >>> -	return atomic64_read(&adev->smu.throttle_int_counter);
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>> +	return atomic64_read(&smu->throttle_int_counter);
> >>>    }
> >>>
> >>>    /* amdgpu_dpm_gfx_state_change - Handle gfx power state change set
> >>> @@ -518,10 +530,12 @@ void amdgpu_dpm_gfx_state_change(struct amdgpu_device *adev,
> >>>    int amdgpu_dpm_get_ecc_info(struct amdgpu_device *adev,
> >>>    			    void *umc_ecc)
> >>>    {
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>>    	if (!is_support_sw_smu(adev))
> >>>    		return -EOPNOTSUPP;
> >>>
> >>> -	return smu_get_ecc_info(&adev->smu, umc_ecc);
> >>> +	return smu_get_ecc_info(smu, umc_ecc);
> >>>    }
> >>>
> >>>    struct amd_vce_state *amdgpu_dpm_get_vce_clock_state(struct amdgpu_device *adev,
> >>> @@ -919,9 +933,10 @@ int amdgpu_dpm_get_smu_prv_buf_details(struct amdgpu_device *adev,
> >>>    int amdgpu_dpm_is_overdrive_supported(struct amdgpu_device *adev)
> >>>    {
> >>>    	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>> -	if ((is_support_sw_smu(adev) && adev->smu.od_enabled) ||
> >>> -	    (is_support_sw_smu(adev) && adev->smu.is_apu) ||
> >>> +	if ((is_support_sw_smu(adev) && smu->od_enabled) ||
> >>> +	    (is_support_sw_smu(adev) && smu->is_apu) ||
> >>>    		(!is_support_sw_smu(adev) && hwmgr->od_enabled))
> >>>    		return true;
> >>>
> >>> @@ -944,7 +959,9 @@ int amdgpu_dpm_set_pp_table(struct amdgpu_device *adev,
> >>>
> >>>    int amdgpu_dpm_get_num_cpu_cores(struct amdgpu_device *adev)
> >>>    {
> >>> -	return adev->smu.cpu_core_num;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>> +	return smu->cpu_core_num;
> >>>    }
> >>>
> >>>    void amdgpu_dpm_stb_debug_fs_init(struct amdgpu_device *adev)
> >>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> index 29791bb21fba..f44139b415b4 100644
> >>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_smu.h
> >>> @@ -205,12 +205,6 @@ enum smu_power_src_type
> >>>    	SMU_POWER_SOURCE_COUNT,
> >>>    };
> >>>
> >>> -enum smu_ppt_limit_type
> >>> -{
> >>> -	SMU_DEFAULT_PPT_LIMIT = 0,
> >>> -	SMU_FAST_PPT_LIMIT,
> >>> -};
> >>> -
> >>>    enum smu_ppt_limit_level
> >>>    {
> >>>    	SMU_PPT_LIMIT_MIN = -1,
> >>> @@ -1389,10 +1383,6 @@ int smu_mode1_reset(struct smu_context *smu);
> >>>
> >>>    extern const struct amd_ip_funcs smu_ip_funcs;
> >>>
> >>> -extern const struct amdgpu_ip_block_version smu_v11_0_ip_block;
> >>> -extern const struct amdgpu_ip_block_version smu_v12_0_ip_block;
> >>> -extern const struct amdgpu_ip_block_version smu_v13_0_ip_block;
> >>> -
> >>>    bool is_support_sw_smu(struct amdgpu_device *adev);
> >>>    bool is_support_cclk_dpm(struct amdgpu_device *adev);
> >>>    int smu_write_watermarks_table(struct smu_context *smu);
> >>> @@ -1416,6 +1406,7 @@ int smu_wait_for_event(struct smu_context *smu, enum smu_event_type event,
> >>>    int smu_get_ecc_info(struct smu_context *smu, void *umc_ecc);
> >>>    int smu_stb_collect_info(struct smu_context *smu, void *buff, uint32_t size);
> >>>    void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev);
> >>> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size);
> >>>
> >>>    #endif
> >>>    #endif
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> index eaed5aba7547..2c3fd3cfef05 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> >>> @@ -468,7 +468,7 @@ bool is_support_sw_smu(struct amdgpu_device
> >> *adev)
> >>>
> >>>    bool is_support_cclk_dpm(struct amdgpu_device *adev)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (!smu_feature_is_enabled(smu, SMU_FEATURE_CCLK_DPM_BIT))
> >>>    		return false;
> >>> @@ -572,7 +572,7 @@ static int
> >>> smu_get_driver_allowed_feature_mask(struct smu_context *smu)
> >>>
> >>>    static int smu_set_funcs(struct amdgpu_device *adev)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (adev->pm.pp_feature & PP_OVERDRIVE_MASK)
> >>>    		smu->od_enabled = true;
> >>> @@ -624,7 +624,11 @@ static int smu_set_funcs(struct amdgpu_device
> >> *adev)
> >>>    static int smu_early_init(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu;
> >>> +
> >>> +	smu = kzalloc(sizeof(struct smu_context), GFP_KERNEL);
> >>> +	if (!smu)
> >>> +		return -ENOMEM;
> >>>
> >>>    	smu->adev = adev;
> >>>    	smu->pm_enabled = !!amdgpu_dpm;
> >>> @@ -684,7 +688,7 @@ static int smu_set_default_dpm_table(struct smu_context *smu)
> >>>    static int smu_late_init(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	int ret = 0;
> >>>
> >>>    	smu_set_fine_grain_gfx_freq_parameters(smu);
> >>> @@ -730,7 +734,7 @@ static int smu_late_init(void *handle)
> >>>
> >>>    	smu_get_fan_parameters(smu);
> >>>
> >>> -	smu_handle_task(&adev->smu,
> >>> +	smu_handle_task(smu,
> >>>    			smu->smu_dpm.dpm_level,
> >>>    			AMD_PP_TASK_COMPLETE_INIT,
> >>>    			false);
> >>> @@ -1020,7 +1024,7 @@ static void smu_interrupt_work_fn(struct work_struct *work)
> >>>    static int smu_sw_init(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	int ret;
> >>>
> >>>    	smu->pool_size = adev->pm.smu_prv_buffer_size;
> >>> @@ -1095,7 +1099,7 @@ static int smu_sw_init(void *handle)
> >>>    static int smu_sw_fini(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	int ret;
> >>>
> >>>    	ret = smu_smc_table_sw_fini(smu);
> >>> @@ -1330,7 +1334,7 @@ static int smu_hw_init(void *handle)
> >>>    {
> >>>    	int ret;
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
> >>>    		smu->pm_enabled = false;
> >>> @@ -1344,10 +1348,10 @@ static int smu_hw_init(void *handle)
> >>>    	}
> >>>
> >>>    	if (smu->is_apu) {
> >>> -		smu_powergate_sdma(&adev->smu, false);
> >>> +		smu_powergate_sdma(smu, false);
> >>>    		smu_dpm_set_vcn_enable(smu, true);
> >>>    		smu_dpm_set_jpeg_enable(smu, true);
> >>> -		smu_set_gfx_cgpg(&adev->smu, true);
> >>> +		smu_set_gfx_cgpg(smu, true);
> >>>    	}
> >>>
> >>>    	if (!smu->pm_enabled)
> >>> @@ -1501,13 +1505,13 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
> >>>    static int smu_hw_fini(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> >>>    		return 0;
> >>>
> >>>    	if (smu->is_apu) {
> >>> -		smu_powergate_sdma(&adev->smu, true);
> >>> +		smu_powergate_sdma(smu, true);
> >>>    	}
> >>>
> >>>    	smu_dpm_set_vcn_enable(smu, false);
> >>> @@ -1524,6 +1528,14 @@ static int smu_hw_fini(void *handle)
> >>>    	return smu_smc_hw_cleanup(smu);
> >>>    }
> >>>
> >>> +static void smu_late_fini(void *handle)
> >>> +{
> >>> +	struct amdgpu_device *adev = handle;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +
> >>> +	kfree(smu);
> >>> +}
> >>> +
> >>
> >> This doesn't look related to this change.
> > [Quan, Evan] "smu" is updated as dynamically allocated. We need to find a
> place to get it freed.
> > As did in powerplay framework, ->late_fini is the right place.
> 
> Thanks, missed the change for dynamic allocation.
> 
> >>
> >>>    static int smu_reset(struct smu_context *smu)
> >>>    {
> >>>    	struct amdgpu_device *adev = smu->adev;
> >>> @@ -1551,7 +1563,7 @@ static int smu_reset(struct smu_context *smu)
> >>>    static int smu_suspend(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	int ret;
> >>>
> >>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> >>> @@ -1570,7 +1582,7 @@ static int smu_suspend(void *handle)
> >>>
> >>>    	/* skip CGPG when in S0ix */
> >>>    	if (smu->is_apu && !adev->in_s0ix)
> >>> -		smu_set_gfx_cgpg(&adev->smu, false);
> >>> +		smu_set_gfx_cgpg(smu, false);
> >>>
> >>>    	return 0;
> >>>    }
> >>> @@ -1579,7 +1591,7 @@ static int smu_resume(void *handle)
> >>>    {
> >>>    	int ret;
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (amdgpu_sriov_vf(adev)&& !amdgpu_sriov_is_pp_one_vf(adev))
> >>>    		return 0;
> >>> @@ -1602,7 +1614,7 @@ static int smu_resume(void *handle)
> >>>    	}
> >>>
> >>>    	if (smu->is_apu)
> >>> -		smu_set_gfx_cgpg(&adev->smu, true);
> >>> +		smu_set_gfx_cgpg(smu, true);
> >>>
> >>>    	smu->disable_uclk_switch = 0;
> >>>
> >>> @@ -2134,6 +2146,7 @@ const struct amd_ip_funcs smu_ip_funcs = {
> >>>    	.sw_fini = smu_sw_fini,
> >>>    	.hw_init = smu_hw_init,
> >>>    	.hw_fini = smu_hw_fini,
> >>> +	.late_fini = smu_late_fini,
> >>>    	.suspend = smu_suspend,
> >>>    	.resume = smu_resume,
> >>>    	.is_idle = NULL,
> >>> @@ -3198,7 +3211,7 @@ int smu_stb_collect_info(struct smu_context *smu, void *buf, uint32_t size)
> >>>    static int smu_stb_debugfs_open(struct inode *inode, struct file *filp)
> >>>    {
> >>>    	struct amdgpu_device *adev = filp->f_inode->i_private;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	unsigned char *buf;
> >>>    	int r;
> >>>
> >>> @@ -3223,7 +3236,7 @@ static ssize_t smu_stb_debugfs_read(struct file *filp, char __user *buf, size_t
> >>>    				loff_t *pos)
> >>>    {
> >>>    	struct amdgpu_device *adev = filp->f_inode->i_private;
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>
> >>>    	if (!filp->private_data)
> >>> @@ -3264,7 +3277,7 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
> >>>    {
> >>>    #if defined(CONFIG_DEBUG_FS)
> >>>
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>
> >>>    	if (!smu->stb_context.stb_buf_size)
> >>>    		return;
> >>> @@ -3276,5 +3289,14 @@ void amdgpu_smu_stb_debug_fs_init(struct amdgpu_device *adev)
> >>>    			    &smu_stb_debugfs_fops,
> >>>    			    smu->stb_context.stb_buf_size);
> >>>    #endif
> >>> +}
> >>> +
> >>> +int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
> >>> +{
> >>> +	int ret = 0;
> >>> +
> >>> +	if (smu->ppt_funcs->send_hbm_bad_pages_num)
> >>> +		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);
> >>>
> >>> +	return ret;
> >>
> >> This also looks unrelated.
> > [Quan, Evan] This was moved from amdgpu_dpm.c to here (amdgpu_smu.c).
> > As smu_context is now an internal data structure of the swsmu framework,
> > any access to smu->ppt_funcs should be issued from amdgpu_smu.c.
> 
> Maybe this change can go together with the corresponding API refactor
> change.
[Quan, Evan] Yeah, it should work. Will do that.
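
For reference, a minimal sketch of the split being discussed (the
amdgpu_dpm_* wrapper below is illustrative only, not necessarily the
final name):

/* amdgpu_smu.c: the only place that dereferences smu->ppt_funcs */
int smu_send_hbm_bad_pages_num(struct smu_context *smu, uint32_t size)
{
	int ret = 0;

	if (smu->ppt_funcs->send_hbm_bad_pages_num)
		ret = smu->ppt_funcs->send_hbm_bad_pages_num(smu, size);

	return ret;
}

/* amdgpu_dpm.c: callers outside power only see the opaque pp_handle */
int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size)
{
	struct smu_context *smu = adev->powerplay.pp_handle;

	return smu_send_hbm_bad_pages_num(smu, size);
}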

BR
Evan
> 
> Thanks,
> Lijo
> 
> >
> > BR
> > Evan
> >>
> >> Thanks,
> >> Lijo
> >>
> >>>    }
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> >>> index 05defeee0c87..a03bbd2a7aa0 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
> >>> @@ -2082,7 +2082,8 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			     struct i2c_msg *msg, int num_msgs)
> >>>    {
> >>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> >>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +	struct smu_table_context *smu_table = &smu->smu_table;
> >>>    	struct smu_table *table = &smu_table->driver_table;
> >>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >>>    	int i, j, r, c;
> >>> @@ -2128,9 +2129,9 @@ static int arcturus_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			}
> >>>    		}
> >>>    	}
> >>> -	mutex_lock(&adev->smu.mutex);
> >>> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> -	mutex_unlock(&adev->smu.mutex);
> >>> +	mutex_lock(&smu->mutex);
> >>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> +	mutex_unlock(&smu->mutex);
> >>>    	if (r)
> >>>    		goto fail;
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> >>> index 2bb7816b245a..37e11716e919 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> >>> @@ -2779,7 +2779,8 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			   struct i2c_msg *msg, int num_msgs)
> >>>    {
> >>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> >>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +	struct smu_table_context *smu_table = &smu->smu_table;
> >>>    	struct smu_table *table = &smu_table->driver_table;
> >>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >>>    	int i, j, r, c;
> >>> @@ -2825,9 +2826,9 @@ static int navi10_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			}
> >>>    		}
> >>>    	}
> >>> -	mutex_lock(&adev->smu.mutex);
> >>> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> -	mutex_unlock(&adev->smu.mutex);
> >>> +	mutex_lock(&smu->mutex);
> >>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> +	mutex_unlock(&smu->mutex);
> >>>    	if (r)
> >>>    		goto fail;
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> index 777f717c37ae..6a5064f4ea86 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> >>> @@ -3459,7 +3459,8 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    				   struct i2c_msg *msg, int num_msgs)
> >>>    {
> >>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> >>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +	struct smu_table_context *smu_table = &smu->smu_table;
> >>>    	struct smu_table *table = &smu_table->driver_table;
> >>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >>>    	int i, j, r, c;
> >>> @@ -3505,9 +3506,9 @@ static int sienna_cichlid_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			}
> >>>    		}
> >>>    	}
> >>> -	mutex_lock(&adev->smu.mutex);
> >>> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> -	mutex_unlock(&adev->smu.mutex);
> >>> +	mutex_lock(&smu->mutex);
> >>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> +	mutex_unlock(&smu->mutex);
> >>>    	if (r)
> >>>    		goto fail;
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> >>> index 28b7c0562b99..2a53b5b1d261 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
> >>> @@ -1372,7 +1372,7 @@ static int smu_v11_0_set_irq_state(struct amdgpu_device *adev,
> >>>    				   unsigned tyep,
> >>>    				   enum amdgpu_interrupt_state state)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	uint32_t low, high;
> >>>    	uint32_t val = 0;
> >>>
> >>> @@ -1441,7 +1441,7 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
> >>>    				 struct amdgpu_irq_src *source,
> >>>    				 struct amdgpu_iv_entry *entry)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	uint32_t client_id = entry->client_id;
> >>>    	uint32_t src_id = entry->src_id;
> >>>    	/*
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> >>> index 6e781cee8bb6..3c82f5455f88 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
> >>> @@ -1484,7 +1484,8 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			      struct i2c_msg *msg, int num_msgs)
> >>>    {
> >>>    	struct amdgpu_device *adev = to_amdgpu_device(i2c_adap);
> >>> -	struct smu_table_context *smu_table = &adev->smu.smu_table;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>> +	struct smu_table_context *smu_table = &smu->smu_table;
> >>>    	struct smu_table *table = &smu_table->driver_table;
> >>>    	SwI2cRequest_t *req, *res = (SwI2cRequest_t *)table->cpu_addr;
> >>>    	int i, j, r, c;
> >>> @@ -1530,9 +1531,9 @@ static int aldebaran_i2c_xfer(struct i2c_adapter *i2c_adap,
> >>>    			}
> >>>    		}
> >>>    	}
> >>> -	mutex_lock(&adev->smu.mutex);
> >>> -	r = smu_cmn_update_table(&adev->smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> -	mutex_unlock(&adev->smu.mutex);
> >>> +	mutex_lock(&smu->mutex);
> >>> +	r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
> >>> +	mutex_unlock(&smu->mutex);
> >>>    	if (r)
> >>>    		goto fail;
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> >>> index 55421ea622fb..4ed01e9d88fb 100644
> >>> --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> >>> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> >>> @@ -1195,7 +1195,7 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
> >>>    				   unsigned tyep,
> >>>    				   enum amdgpu_interrupt_state state)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	uint32_t low, high;
> >>>    	uint32_t val = 0;
> >>>
> >>> @@ -1270,7 +1270,7 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
> >>>    				 struct amdgpu_irq_src *source,
> >>>    				 struct amdgpu_iv_entry *entry)
> >>>    {
> >>> -	struct smu_context *smu = &adev->smu;
> >>> +	struct smu_context *smu = adev->powerplay.pp_handle;
> >>>    	uint32_t client_id = entry->client_id;
> >>>    	uint32_t src_id = entry->src_id;
> >>>    	/*
> >>> @@ -1316,11 +1316,11 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
> >>>    			switch (ctxid) {
> >>>    			case 0x3:
> >>>    				dev_dbg(adev->dev, "Switched to AC
> >> mode!\n");
> >>> -				smu_v13_0_ack_ac_dc_interrupt(&adev-
> >>> smu);
> >>> +				smu_v13_0_ack_ac_dc_interrupt(smu);
> >>>    				break;
> >>>    			case 0x4:
> >>>    				dev_dbg(adev->dev, "Switched to DC
> >> mode!\n");
> >>> -				smu_v13_0_ack_ac_dc_interrupt(&adev-
> >>> smu);
> >>> +				smu_v13_0_ack_ac_dc_interrupt(smu);
> >>>    				break;
> >>>    			case 0x7:
> >>>    				/*
> >>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-12-01  7:17         ` Quan, Evan
@ 2021-12-01  7:36           ` Lazar, Lijo
  2021-12-02  1:24             ` Quan, Evan
  0 siblings, 1 reply; 44+ messages in thread
From: Lazar, Lijo @ 2021-12-01  7:36 UTC (permalink / raw)
  To: Quan, Evan, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian



On 12/1/2021 12:47 PM, Quan, Evan wrote:
> [AMD Official Use Only]
> 
> 
> 
>> -----Original Message-----
>> From: Lazar, Lijo <Lijo.Lazar@amd.com>
>> Sent: Wednesday, December 1, 2021 12:19 PM
>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
>> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
>> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
>> APIs used only by legacy ASICs(si/kv)
>>
>>
>>
>> On 12/1/2021 8:43 AM, Quan, Evan wrote:
>>> [AMD Official Use Only]
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Lazar, Lijo <Lijo.Lazar@amd.com>
>>>> Sent: Tuesday, November 30, 2021 9:21 PM
>>>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
>>>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig,
>> Christian
>>>> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
>>>> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for
>> those
>>>> APIs used only by legacy ASICs(si/kv)
>>>>
>>>>
>>>>
>>>> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>>>> Those APIs are used only by legacy ASICs (si/kv). They cannot be
> >>>>> shared by other ASICs. So, we create a new holder for them.
>>>>>
>>>>> Signed-off-by: Evan Quan <evan.quan@amd.com>
>>>>> Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
>>>>> --
>>>>> v1->v2:
>>>>>      - move other APIs used by si/kv in amdgpu_atombios.c to the new
> >>>>>        holder also (Alex)
>>>>> ---
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
>>>>>     .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
>>>>>     drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
>>>>>     drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
>>>>>     drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
>>>>>     drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453 +++++++++++++++++
>>>>>     drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
>>>>>     drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
>>>>>     10 files changed, 1534 insertions(+), 1470 deletions(-)
> >>>>>     create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>>>>     create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>>>>
> >>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>>>> index 12a6b1c99c93..f2e447212e62 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
>>>>> @@ -1083,427 +1083,6 @@ int
>>>> amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>>>>>     	return 0;
>>>>>     }
>>>>>
> >>>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>>>> -					    u32 clock,
> >>>>> -					    bool strobe_mode,
> >>>>> -					    struct atom_mpll_param *mpll_param)
>>>>> -{
>>>>> -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> >>>>> -	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
>>>>> -	u8 frev, crev;
>>>>> -
>>>>> -	memset(&args, 0, sizeof(args));
>>>>> -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
>>>>> -
> >>>>> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
>>>>> -		return -EINVAL;
>>>>> -
>>>>> -	switch (frev) {
>>>>> -	case 2:
>>>>> -		switch (crev) {
>>>>> -		case 1:
>>>>> -			/* SI */
>>>>> -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
>>>>> -			args.ucInputFlag = 0;
>>>>> -			if (strobe_mode)
>>>>> -				args.ucInputFlag |=
>>>> MPLL_INPUT_FLAG_STROBE_MODE_EN;
>>>>> -
>>>>> -			amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> -
>>>>> -			mpll_param->clkfrac =
>>>> le16_to_cpu(args.ulFbDiv.usFbDivFrac);
>>>>> -			mpll_param->clkf =
>>>> le16_to_cpu(args.ulFbDiv.usFbDiv);
>>>>> -			mpll_param->post_div = args.ucPostDiv;
>>>>> -			mpll_param->dll_speed = args.ucDllSpeed;
>>>>> -			mpll_param->bwcntl = args.ucBWCntl;
>>>>> -			mpll_param->vco_mode =
>>>>> -				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_VCO_MODE_MASK);
>>>>> -			mpll_param->yclk_sel =
>>>>> -				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
>>>>> -			mpll_param->qdr =
>>>>> -				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
>>>>> -			mpll_param->half_rate =
>>>>> -				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
>>>>> -			break;
>>>>> -		default:
>>>>> -			return -EINVAL;
>>>>> -		}
>>>>> -		break;
>>>>> -	default:
>>>>> -		return -EINVAL;
>>>>> -	}
>>>>> -	return 0;
>>>>> -}
>>>>> -
> >>>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>>>> -					     u32 eng_clock, u32 mem_clock)
>>>>> -{
>>>>> -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> >>>>> -	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
>>>>> -	u32 tmp;
>>>>> -
>>>>> -	memset(&args, 0, sizeof(args));
>>>>> -
>>>>> -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
>>>>> -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
>>>>> -
>>>>> -	args.ulTargetEngineClock = cpu_to_le32(tmp);
>>>>> -	if (mem_clock)
>>>>> -		args.sReserved.ulClock = cpu_to_le32(mem_clock &
>>>> SET_CLOCK_FREQ_MASK);
>>>>> -
> >>>>> -	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
>>>>> -}
>>>>> -
> >>>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd)
>>>>> -{
>>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
>>>>> -	u8 frev, crev;
>>>>> -	u16 data_offset;
>>>>> -	union firmware_info *firmware_info;
>>>>> -
>>>>> -	*vddc = 0;
>>>>> -	*vddci = 0;
>>>>> -	*mvdd = 0;
>>>>> -
>>>>> -	if (amdgpu_atom_parse_data_header(mode_info->atom_context,
>>>> index, NULL,
>>>>> -				   &frev, &crev, &data_offset)) {
>>>>> -		firmware_info =
>>>>> -			(union firmware_info *)(mode_info->atom_context-
>>>>> bios +
>>>>> -						data_offset);
>>>>> -		*vddc = le16_to_cpu(firmware_info-
>>>>> info_14.usBootUpVDDCVoltage);
>>>>> -		if ((frev == 2) && (crev >= 2)) {
>>>>> -			*vddci = le16_to_cpu(firmware_info-
>>>>> info_22.usBootUpVDDCIVoltage);
>>>>> -			*mvdd = le16_to_cpu(firmware_info-
>>>>> info_22.usBootUpMVDDCVoltage);
>>>>> -		}
>>>>> -	}
>>>>> -}
>>>>> -
>>>>> -union set_voltage {
>>>>> -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
>>>>> -	struct _SET_VOLTAGE_PARAMETERS v1;
>>>>> -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
>>>>> -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
>>>>> -};
>>>>> -
> >>>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>>>> -			     u16 voltage_id, u16 *voltage)
>>>>> -{
>>>>> -	union set_voltage args;
>>>>> -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
>>>>> -	u8 frev, crev;
>>>>> -
>>>>> -	if (!amdgpu_atom_parse_cmd_header(adev-
>>>>> mode_info.atom_context, index, &frev, &crev))
>>>>> -		return -EINVAL;
>>>>> -
>>>>> -	switch (crev) {
>>>>> -	case 1:
>>>>> -		return -EINVAL;
>>>>> -	case 2:
>>>>> -		args.v2.ucVoltageType =
>>>> SET_VOLTAGE_GET_MAX_VOLTAGE;
>>>>> -		args.v2.ucVoltageMode = 0;
>>>>> -		args.v2.usVoltageLevel = 0;
>>>>> -
>>>>> -		amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> -
>>>>> -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
>>>>> -		break;
>>>>> -	case 3:
>>>>> -		args.v3.ucVoltageType = voltage_type;
>>>>> -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
>>>>> -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
>>>>> -
>>>>> -		amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> -
>>>>> -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
>>>>> -		break;
>>>>> -	default:
>>>>> -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>>>> -		return -EINVAL;
>>>>> -	}
>>>>> -
>>>>> -	return 0;
>>>>> -}
>>>>> -
> >>>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>>>> -						      u16 *voltage,
>>>>> -						      u16 leakage_idx)
>>>>> -{
>>>>> -	return amdgpu_atombios_get_max_vddc(adev,
>>>> VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
>>>>> -}
>>>>> -
>>>>> -union voltage_object_info {
>>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
>>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
>>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
>>>>> -};
>>>>> -
>>>>> -union voltage_object {
>>>>> -	struct _ATOM_VOLTAGE_OBJECT v1;
>>>>> -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
>>>>> -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
>>>>> -};
>>>>> -
>>>>> -
>>>>> -static ATOM_VOLTAGE_OBJECT_V3
>>>>
>> *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_I
>>>> NFO_V3_1 *v3,
>>>>> -									u8
>>>> voltage_type, u8 voltage_mode)
>>>>> -{
>>>>> -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
>>>>> -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1,
>>>> asVoltageObj[0]);
>>>>> -	u8 *start = (u8 *)v3;
>>>>> -
>>>>> -	while (offset < size) {
>>>>> -		ATOM_VOLTAGE_OBJECT_V3 *vo =
>>>> (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
>>>>> -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType ==
>>>> voltage_type) &&
>>>>> -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode ==
>>>> voltage_mode))
>>>>> -			return vo;
>>>>> -		offset += le16_to_cpu(vo-
>>>>> asGpioVoltageObj.sHeader.usSize);
>>>>> -	}
>>>>> -	return NULL;
>>>>> -}
>>>>> -
>>>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>>>> -			      u8 voltage_type,
>>>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
>>>>> -{
>>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> -	u8 frev, crev;
>>>>> -	u16 data_offset, size;
>>>>> -	union voltage_object_info *voltage_info;
>>>>> -	union voltage_object *voltage_object = NULL;
>>>>> -
>>>>> -	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> -				   &frev, &crev, &data_offset)) {
>>>>> -		voltage_info = (union voltage_object_info *)
>>>>> -			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> -
>>>>> -		switch (frev) {
>>>>> -		case 3:
>>>>> -			switch (crev) {
>>>>> -			case 1:
>>>>> -				voltage_object = (union voltage_object *)
>>>>> -
>>>> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> -
>>>> voltage_type,
>>>>> -
>>>> VOLTAGE_OBJ_SVID2);
>>>>> -				if (voltage_object) {
>>>>> -					*svd_gpio_id = voltage_object-
>>>>> v3.asSVID2Obj.ucSVDGpioId;
>>>>> -					*svc_gpio_id = voltage_object-
>>>>> v3.asSVID2Obj.ucSVCGpioId;
>>>>> -				} else {
>>>>> -					return -EINVAL;
>>>>> -				}
>>>>> -				break;
>>>>> -			default:
>>>>> -				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> -				return -EINVAL;
>>>>> -			}
>>>>> -			break;
>>>>> -		default:
>>>>> -			DRM_ERROR("unknown voltage object table\n");
>>>>> -			return -EINVAL;
>>>>> -		}
>>>>> -
>>>>> -	}
>>>>> -	return 0;
>>>>> -}
>>>>> -
>>>>> -bool
>>>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>>>> -				u8 voltage_type, u8 voltage_mode)
>>>>> -{
>>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> -	u8 frev, crev;
>>>>> -	u16 data_offset, size;
>>>>> -	union voltage_object_info *voltage_info;
>>>>> -
>>>>> -	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> -				   &frev, &crev, &data_offset)) {
>>>>> -		voltage_info = (union voltage_object_info *)
>>>>> -			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> -
>>>>> -		switch (frev) {
>>>>> -		case 3:
>>>>> -			switch (crev) {
>>>>> -			case 1:
>>>>> -				if
>>>> (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> -
>>>> voltage_type, voltage_mode))
>>>>> -					return true;
>>>>> -				break;
>>>>> -			default:
>>>>> -				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> -				return false;
>>>>> -			}
>>>>> -			break;
>>>>> -		default:
>>>>> -			DRM_ERROR("unknown voltage object table\n");
>>>>> -			return false;
>>>>> -		}
>>>>> -
>>>>> -	}
>>>>> -	return false;
>>>>> -}
>>>>> -
>>>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>>>> -				      u8 voltage_type, u8 voltage_mode,
>>>>> -				      struct atom_voltage_table *voltage_table)
>>>>> -{
>>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> -	u8 frev, crev;
>>>>> -	u16 data_offset, size;
>>>>> -	int i;
>>>>> -	union voltage_object_info *voltage_info;
>>>>> -	union voltage_object *voltage_object = NULL;
>>>>> -
>>>>> -	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> -				   &frev, &crev, &data_offset)) {
>>>>> -		voltage_info = (union voltage_object_info *)
>>>>> -			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> -
>>>>> -		switch (frev) {
>>>>> -		case 3:
>>>>> -			switch (crev) {
>>>>> -			case 1:
>>>>> -				voltage_object = (union voltage_object *)
>>>>> -
>>>> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> -
>>>> voltage_type, voltage_mode);
>>>>> -				if (voltage_object) {
>>>>> -					ATOM_GPIO_VOLTAGE_OBJECT_V3
>>>> *gpio =
>>>>> -						&voltage_object-
>>>>> v3.asGpioVoltageObj;
>>>>> -					VOLTAGE_LUT_ENTRY_V2 *lut;
>>>>> -					if (gpio->ucGpioEntryNum >
>>>> MAX_VOLTAGE_ENTRIES)
>>>>> -						return -EINVAL;
>>>>> -					lut = &gpio->asVolGpioLut[0];
>>>>> -					for (i = 0; i < gpio->ucGpioEntryNum;
>>>> i++) {
>>>>> -						voltage_table-
>>>>> entries[i].value =
>>>>> -							le16_to_cpu(lut-
>>>>> usVoltageValue);
>>>>> -						voltage_table-
>>>>> entries[i].smio_low =
>>>>> -							le32_to_cpu(lut-
>>>>> ulVoltageId);
>>>>> -						lut =
>>>> (VOLTAGE_LUT_ENTRY_V2 *)
>>>>> -							((u8 *)lut +
>>>> sizeof(VOLTAGE_LUT_ENTRY_V2));
>>>>> -					}
>>>>> -					voltage_table->mask_low =
>>>> le32_to_cpu(gpio->ulGpioMaskVal);
>>>>> -					voltage_table->count = gpio-
>>>>> ucGpioEntryNum;
>>>>> -					voltage_table->phase_delay = gpio-
>>>>> ucPhaseDelay;
>>>>> -					return 0;
>>>>> -				}
>>>>> -				break;
>>>>> -			default:
>>>>> -				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> -				return -EINVAL;
>>>>> -			}
>>>>> -			break;
>>>>> -		default:
>>>>> -			DRM_ERROR("unknown voltage object table\n");
>>>>> -			return -EINVAL;
>>>>> -		}
>>>>> -	}
>>>>> -	return -EINVAL;
>>>>> -}
>>>>> -
>>>>> -union vram_info {
>>>>> -	struct _ATOM_VRAM_INFO_V3 v1_3;
>>>>> -	struct _ATOM_VRAM_INFO_V4 v1_4;
>>>>> -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
>>>>> -};
>>>>> -
>>>>> -#define MEM_ID_MASK           0xff000000
>>>>> -#define MEM_ID_SHIFT          24
>>>>> -#define CLOCK_RANGE_MASK      0x00ffffff
>>>>> -#define CLOCK_RANGE_SHIFT     0
>>>>> -#define LOW_NIBBLE_MASK       0xf
>>>>> -#define DATA_EQU_PREV         0
>>>>> -#define DATA_FROM_TABLE       4
>>>>> -
>>>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>>>> -				      u8 module_index,
>>>>> -				      struct atom_mc_reg_table *reg_table)
>>>>> -{
>>>>> -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
>>>>> -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
>>>>> -	u32 i = 0, j;
>>>>> -	u16 data_offset, size;
>>>>> -	union vram_info *vram_info;
>>>>> -
>>>>> -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
>>>>> -
>>>>> -	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> -				   &frev, &crev, &data_offset)) {
>>>>> -		vram_info = (union vram_info *)
>>>>> -			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> -		switch (frev) {
>>>>> -		case 1:
>>>>> -			DRM_ERROR("old table version %d, %d\n", frev,
>>>> crev);
>>>>> -			return -EINVAL;
>>>>> -		case 2:
>>>>> -			switch (crev) {
>>>>> -			case 1:
>>>>> -				if (module_index < vram_info-
>>>>> v2_1.ucNumOfVRAMModule) {
>>>>> -					ATOM_INIT_REG_BLOCK *reg_block
>>>> =
>>>>> -						(ATOM_INIT_REG_BLOCK *)
>>>>> -						((u8 *)vram_info +
>>>> le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
>>>>> -
>>>> 	ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
>>>>> -
>>>> 	(ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>>>> -						((u8 *)reg_block + (2 *
>>>> sizeof(u16)) +
>>>>> -						 le16_to_cpu(reg_block-
>>>>> usRegIndexTblSize));
>>>>> -					ATOM_INIT_REG_INDEX_FORMAT
>>>> *format = &reg_block->asRegIndexBuf[0];
>>>>> -					num_entries =
>>>> (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
>>>>> -
>>>> sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
>>>>> -					if (num_entries >
>>>> VBIOS_MC_REGISTER_ARRAY_SIZE)
>>>>> -						return -EINVAL;
>>>>> -					while (i < num_entries) {
>>>>> -						if (format-
>>>>> ucPreRegDataLength & ACCESS_PLACEHOLDER)
>>>>> -							break;
>>>>> -						reg_table-
>>>>> mc_reg_address[i].s1 =
>>>>> -
>>>> 	(u16)(le16_to_cpu(format->usRegIndex));
>>>>> -						reg_table-
>>>>> mc_reg_address[i].pre_reg_data =
>>>>> -							(u8)(format-
>>>>> ucPreRegDataLength);
>>>>> -						i++;
>>>>> -						format =
>>>> (ATOM_INIT_REG_INDEX_FORMAT *)
>>>>> -							((u8 *)format +
>>>> sizeof(ATOM_INIT_REG_INDEX_FORMAT));
>>>>> -					}
>>>>> -					reg_table->last = i;
>>>>> -					while ((le32_to_cpu(*(u32
>>>> *)reg_data) != END_OF_REG_DATA_BLOCK) &&
>>>>> -					       (num_ranges <
>>>> VBIOS_MAX_AC_TIMING_ENTRIES)) {
>>>>> -						t_mem_id =
>>>> (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
>>>>> -								>>
>>>> MEM_ID_SHIFT);
>>>>> -						if (module_index ==
>>>> t_mem_id) {
>>>>> -							reg_table-
>>>>> mc_reg_table_entry[num_ranges].mclk_max =
>>>>> -
>>>> 	(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
>>>>> -								      >>
>>>> CLOCK_RANGE_SHIFT);
>>>>> -							for (i = 0, j = 1; i <
>>>> reg_table->last; i++) {
>>>>> -								if ((reg_table-
>>>>> mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
>>>> DATA_FROM_TABLE) {
>>>>> -
>>>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>>>> -
>>>> 	(u32)le32_to_cpu(*((u32 *)reg_data + j));
>>>>> -									j++;
>>>>> -								} else if
>>>> ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) ==
>>>> DATA_EQU_PREV) {
>>>>> -
>>>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>>>> -
>>>> 	reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
>>>>> -								}
>>>>> -							}
>>>>> -							num_ranges++;
>>>>> -						}
>>>>> -						reg_data =
>>>> (ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>>>> -							((u8 *)reg_data +
>>>> le16_to_cpu(reg_block->usRegDataBlkSize));
>>>>> -					}
>>>>> -					if (le32_to_cpu(*(u32 *)reg_data) !=
>>>> END_OF_REG_DATA_BLOCK)
>>>>> -						return -EINVAL;
>>>>> -					reg_table->num_entries =
>>>> num_ranges;
>>>>> -				} else
>>>>> -					return -EINVAL;
>>>>> -				break;
>>>>> -			default:
>>>>> -				DRM_ERROR("Unknown table
>>>> version %d, %d\n", frev, crev);
>>>>> -				return -EINVAL;
>>>>> -			}
>>>>> -			break;
>>>>> -		default:
>>>>> -			DRM_ERROR("Unknown table version %d, %d\n",
>>>> frev, crev);
>>>>> -			return -EINVAL;
>>>>> -		}
>>>>> -		return 0;
>>>>> -	}
>>>>> -	return -EINVAL;
>>>>> -}
>>>>> -
>>>>>     bool amdgpu_atombios_has_gpu_virtualization_table(struct
>>>> amdgpu_device *adev)
>>>>>     {
>>>>>     	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
> >>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>>>> index 27e74b1fc260..cb5649298dcb 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
>>>>> @@ -160,26 +160,6 @@ int
>> amdgpu_atombios_get_clock_dividers(struct
>>>> amdgpu_device *adev,
>>>>>     				       bool strobe_mode,
>>>>>     				       struct atom_clock_dividers *dividers);
>>>>>
> >>>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>>>> -					    u32 clock,
>>>>> -					    bool strobe_mode,
> >>>>> -					    struct atom_mpll_param *mpll_param);
>>>>> -
> >>>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>>>> -					     u32 eng_clock, u32 mem_clock);
>>>>> -
>>>>> -bool
>>>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>>>> -				u8 voltage_type, u8 voltage_mode);
>>>>> -
>>>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>>>> -				      u8 voltage_type, u8 voltage_mode,
>>>>> -				      struct atom_voltage_table
>>>> *voltage_table);
>>>>> -
>>>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>>>> -				      u8 module_index,
>>>>> -				      struct atom_mc_reg_table *reg_table);
>>>>> -
>>>>>     bool amdgpu_atombios_has_gpu_virtualization_table(struct
>>>> amdgpu_device *adev);
>>>>>
>>>>>     void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device
>> *adev,
>>>> bool lock);
>>>>> @@ -190,21 +170,11 @@ void
>>>> amdgpu_atombios_scratch_regs_set_backlight_level(struct
>> amdgpu_device
>>>> *adev
>>>>>     bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device
>>>> *adev);
>>>>>
>>>>>     void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes,
>> bool
>>>> to_le);
> >>>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>>>> -			     u16 voltage_id, u16 *voltage);
> >>>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>>>> -						      u16 *voltage,
>>>>> -						      u16 leakage_idx);
> >>>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >>>>>     int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
>>>>>     				       u8 clock_type,
>>>>>     				       u32 clock,
>>>>>     				       bool strobe_mode,
>>>>>     				       struct atom_clock_dividers *dividers);
>>>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>>>> -			      u8 voltage_type,
>>>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
>>>>>
>>>>>     int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
>>>>>     				   uint32_t table,
>>>>
>>>>
>>>> Whether used in legacy or new logic, atombios table parsing/execution
> >>>> should be kept as separate logic. These shouldn't be moved along with dpm.
> >>> [Quan, Evan] Are you suggesting another placeholder for those atombios
> >>> APIs? Like legacy_atombios.c?
>>
>> What I meant is there's no need to move them; keep them in the same file. We
>> also have atomfirmware, so splitting this out and adding another
>> legacy_atombios is not required.
> [Quan, Evan] Hmm, that seems contrary to Alex's suggestion.
> Although I'm fine with either, I kind of prefer Alex's approach.
> That is, if they are destined to be dropped (together with SI/KV support), we should get them separated now.
> 

Hmm, that is not the way the code is structured currently. We don't keep
them as atombios_powerplay.c or atomfirmware_smu.c. The logic related to
atombios is kept in a single place. We could mark these as legacy APIs
such that they get dropped whenever KV/SI support is dropped.
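
For illustration only, the declarations could then stay in
amdgpu_atombios.h grouped under a clearly marked legacy section
(signatures taken from this patch):

/* Legacy DPM (SI/KV) only - drop together with SI/KV support */
int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
					    u32 clock,
					    bool strobe_mode,
					    struct atom_mpll_param *mpll_param);
void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
					     u32 eng_clock, u32 mem_clock);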

Thanks,
Lijo


> BR
> Evan
>>
>>>>
>>>>
> >>>>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>>>> index 2e295facd086..cdf724dcf832 100644
>>>>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>>>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
>>>>> @@ -404,6 +404,7 @@ struct amd_pm_funcs {
>>>>>     	int (*get_dpm_clock_table)(void *handle,
>>>>>     				   struct dpm_clocks *clock_table);
>>>>>     	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t
>>>> *size);
>>>>> +	int (*change_power_state)(void *handle);
>>>>>     };
>>>>>
>>>>>     struct metrics_table_header {
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>>>> index ecaf0081bc31..c6801d10cde6 100644
>>>>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>>>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
>>>>> @@ -34,113 +34,9 @@
>>>>>
>>>>>     #define WIDTH_4K 3840
>>>>>
>>>>> -#define amdgpu_dpm_pre_set_power_state(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
>>>>> -
>>>>> -#define amdgpu_dpm_post_set_power_state(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
>>>>> -
>>>>> -#define amdgpu_dpm_display_configuration_changed(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
>>>>> -
>>>>> -#define amdgpu_dpm_print_power_state(adev, ps) \
> >>>>> -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
>>>>> -
>>>>> -#define amdgpu_dpm_vblank_too_short(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
>>>>> -
>>>>>     #define amdgpu_dpm_enable_bapm(adev, e) \
> >>>>>     		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
>>>>>
>>>>> -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> >>>>> -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
>>>>> -
>>>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
>>>>> -{
>>>>> -	const char *s;
>>>>> -
>>>>> -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
>>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
>>>>> -	default:
>>>>> -		s = "none";
>>>>> -		break;
>>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
>>>>> -		s = "battery";
>>>>> -		break;
>>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
>>>>> -		s = "balanced";
>>>>> -		break;
>>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
>>>>> -		s = "performance";
>>>>> -		break;
>>>>> -	}
>>>>> -	printk("\tui class: %s\n", s);
>>>>> -	printk("\tinternal class:");
>>>>> -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
>>>>> -	    (class2 == 0))
>>>>> -		pr_cont(" none");
>>>>> -	else {
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
>>>>> -			pr_cont(" boot");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>>>> -			pr_cont(" thermal");
>>>>> -		if (class &
>>>> ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
>>>>> -			pr_cont(" limited_pwr");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
>>>>> -			pr_cont(" rest");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
>>>>> -			pr_cont(" forced");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>>>> -			pr_cont(" 3d_perf");
>>>>> -		if (class &
>>>> ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
>>>>> -			pr_cont(" ovrdrv");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
>>>>> -			pr_cont(" uvd");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
>>>>> -			pr_cont(" 3d_low");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>>>> -			pr_cont(" acpi");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>>>> -			pr_cont(" uvd_hd2");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>>>> -			pr_cont(" uvd_hd");
>>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>>>> -			pr_cont(" uvd_sd");
>>>>> -		if (class2 &
>>>> ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
>>>>> -			pr_cont(" limited_pwr2");
>>>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>>>> -			pr_cont(" ulv");
>>>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>>>> -			pr_cont(" uvd_mvc");
>>>>> -	}
>>>>> -	pr_cont("\n");
>>>>> -}
>>>>> -
>>>>> -void amdgpu_dpm_print_cap_info(u32 caps)
>>>>> -{
>>>>> -	printk("\tcaps:");
>>>>> -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
>>>>> -		pr_cont(" single_disp");
>>>>> -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
>>>>> -		pr_cont(" video");
>>>>> -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
>>>>> -		pr_cont(" no_dc");
>>>>> -	pr_cont("\n");
>>>>> -}
>>>>> -
>>>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>>>> -				struct amdgpu_ps *rps)
>>>>> -{
>>>>> -	printk("\tstatus:");
>>>>> -	if (rps == adev->pm.dpm.current_ps)
>>>>> -		pr_cont(" c");
>>>>> -	if (rps == adev->pm.dpm.requested_ps)
>>>>> -		pr_cont(" r");
>>>>> -	if (rps == adev->pm.dpm.boot_ps)
>>>>> -		pr_cont(" b");
>>>>> -	pr_cont("\n");
>>>>> -}
>>>>> -
>>>>>     static void amdgpu_dpm_get_active_displays(struct amdgpu_device
>>>> *adev)
>>>>>     {
>>>>>     	struct drm_device *ddev = adev_to_drm(adev);
>>>>> @@ -161,7 +57,6 @@ static void
>> amdgpu_dpm_get_active_displays(struct
>>>> amdgpu_device *adev)
>>>>>     	}
>>>>>     }
>>>>>
>>>>> -
>>>>>     u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
>>>>>     {
>>>>>     	struct drm_device *dev = adev_to_drm(adev);
>>>>> @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct
>>>> amdgpu_device *adev)
>>>>>     	return vrefresh;
>>>>>     }
>>>>>
>>>>> -union power_info {
>>>>> -	struct _ATOM_POWERPLAY_INFO info;
>>>>> -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
>>>>> -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
>>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
>>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
>>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
>>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
>>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
>>>>> -};
>>>>> -
>>>>> -union fan_info {
>>>>> -	struct _ATOM_PPLIB_FANTABLE fan;
>>>>> -	struct _ATOM_PPLIB_FANTABLE2 fan2;
>>>>> -	struct _ATOM_PPLIB_FANTABLE3 fan3;
>>>>> -};
>>>>> -
> >>>>> -static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> >>>>> -					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
>>>>> -{
>>>>> -	u32 size = atom_table->ucNumEntries *
>>>>> -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> -	int i;
>>>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
>>>>> -
>>>>> -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
>>>>> -	if (!amdgpu_table->entries)
>>>>> -		return -ENOMEM;
>>>>> -
>>>>> -	entry = &atom_table->entries[0];
>>>>> -	for (i = 0; i < atom_table->ucNumEntries; i++) {
>>>>> -		amdgpu_table->entries[i].clk = le16_to_cpu(entry-
>>>>> usClockLow) |
>>>>> -			(entry->ucClockHigh << 16);
>>>>> -		amdgpu_table->entries[i].v = le16_to_cpu(entry-
>>>>> usVoltage);
>>>>> -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record
>>>> *)
>>>>> -			((u8 *)entry +
>>>> sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
>>>>> -	}
>>>>> -	amdgpu_table->count = atom_table->ucNumEntries;
>>>>> -
>>>>> -	return 0;
>>>>> -}
>>>>> -
>>>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> -	union power_info *power_info;
>>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> -	u16 data_offset;
>>>>> -	u8 frev, crev;
>>>>> -
>>>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
>>>> index, NULL,
>>>>> -				   &frev, &crev, &data_offset))
>>>>> -		return -EINVAL;
>>>>> -	power_info = (union power_info *)(mode_info->atom_context-
>>>>> bios + data_offset);
>>>>> -
>>>>> -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info-
>>>>> pplib.ulPlatformCaps);
>>>>> -	adev->pm.dpm.backbias_response_time =
>>>> le16_to_cpu(power_info->pplib.usBackbiasTime);
>>>>> -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info-
>>>>> pplib.usVoltageTime);
>>>>> -
>>>>> -	return 0;
>>>>> -}
>>>>> -
>>>>> -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
>>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
>>>>> -
> >>>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> -	union power_info *power_info;
>>>>> -	union fan_info *fan_info;
>>>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
>>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> -	u16 data_offset;
>>>>> -	u8 frev, crev;
>>>>> -	int ret, i;
>>>>> -
>>>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context,
>>>> index, NULL,
>>>>> -				   &frev, &crev, &data_offset))
>>>>> -		return -EINVAL;
>>>>> -	power_info = (union power_info *)(mode_info->atom_context-
>>>>> bios + data_offset);
>>>>> -
>>>>> -	/* fan table */
>>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>>>> -		if (power_info->pplib3.usFanTableOffset) {
>>>>> -			fan_info = (union fan_info *)(mode_info-
>>>>> atom_context->bios + data_offset +
>>>>> -						      le16_to_cpu(power_info-
>>>>> pplib3.usFanTableOffset));
>>>>> -			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
>>>>> -			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info-
>>>>> fan.usTMin);
>>>>> -			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info-
>>>>> fan.usTMed);
>>>>> -			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info-
>>>>> fan.usTHigh);
>>>>> -			adev->pm.dpm.fan.pwm_min =
>>>> le16_to_cpu(fan_info->fan.usPWMMin);
>>>>> -			adev->pm.dpm.fan.pwm_med =
>>>> le16_to_cpu(fan_info->fan.usPWMMed);
>>>>> -			adev->pm.dpm.fan.pwm_high =
>>>> le16_to_cpu(fan_info->fan.usPWMHigh);
>>>>> -			if (fan_info->fan.ucFanTableFormat >= 2)
>>>>> -				adev->pm.dpm.fan.t_max =
>>>> le16_to_cpu(fan_info->fan2.usTMax);
>>>>> -			else
>>>>> -				adev->pm.dpm.fan.t_max = 10900;
>>>>> -			adev->pm.dpm.fan.cycle_delay = 100000;
>>>>> -			if (fan_info->fan.ucFanTableFormat >= 3) {
>>>>> -				adev->pm.dpm.fan.control_mode =
>>>> fan_info->fan3.ucFanControlMode;
>>>>> -				adev->pm.dpm.fan.default_max_fan_pwm
>>>> =
>>>>> -					le16_to_cpu(fan_info-
>>>>> fan3.usFanPWMMax);
>>>>> -				adev-
>>>>> pm.dpm.fan.default_fan_output_sensitivity = 4836;
>>>>> -				adev->pm.dpm.fan.fan_output_sensitivity =
>>>>> -					le16_to_cpu(fan_info-
>>>>> fan3.usFanOutputSensitivity);
>>>>> -			}
>>>>> -			adev->pm.dpm.fan.ucode_fan_control = true;
>>>>> -		}
>>>>> -	}
>>>>> -
>>>>> -	/* clock dependancy tables, shedding tables */
>>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
>>>>> -		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
>>>>> -			dep_table =
>>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> -				(mode_info->atom_context->bios +
>>>> data_offset +
>>>>> -				 le16_to_cpu(power_info-
>>>>> pplib4.usVddcDependencyOnSCLKOffset));
>>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
>>>>> pm.dpm.dyn_state.vddc_dependency_on_sclk,
>>>>> -								 dep_table);
>>>>> -			if (ret) {
>>>>> -
>>>> 	amdgpu_free_extended_power_table(adev);
>>>>> -				return ret;
>>>>> -			}
>>>>> -		}
>>>>> -		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
>>>>> -			dep_table =
>>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> -				(mode_info->atom_context->bios +
>>>> data_offset +
>>>>> -				 le16_to_cpu(power_info-
>>>>> pplib4.usVddciDependencyOnMCLKOffset));
>>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
>>>>> pm.dpm.dyn_state.vddci_dependency_on_mclk,
>>>>> -								 dep_table);
>>>>> -			if (ret) {
>>>>> -
>>>> 	amdgpu_free_extended_power_table(adev);
>>>>> -				return ret;
>>>>> -			}
>>>>> -		}
>>>>> -		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
>>>>> -			dep_table =
>>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> -				(mode_info->atom_context->bios +
>>>> data_offset +
>>>>> -				 le16_to_cpu(power_info-
>>>>> pplib4.usVddcDependencyOnMCLKOffset));
>>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
>>>>> pm.dpm.dyn_state.vddc_dependency_on_mclk,
>>>>> -								 dep_table);
>>>>> -			if (ret) {
>>>>> -
>>>> 	amdgpu_free_extended_power_table(adev);
>>>>> -				return ret;
>>>>> -			}
>>>>> -		}
>>>>> -		if (power_info->pplib4.usMvddDependencyOnMCLKOffset)
>>>> {
>>>>> -			dep_table =
>>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> -				(mode_info->atom_context->bios +
>>>> data_offset +
>>>>> -				 le16_to_cpu(power_info-
>>>>> pplib4.usMvddDependencyOnMCLKOffset));
>>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(&adev-
>>>>> pm.dpm.dyn_state.mvdd_dependency_on_mclk,
>>>>> -								 dep_table);
>>>>> -			if (ret) {
>>>>> -
>>>> 	amdgpu_free_extended_power_table(adev);
>>>>> -				return ret;
>>>>> -			}
>>>>> -		}
>>>>> -		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
>>>>> -			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
>>>>> -				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
>>>>> -				(mode_info->atom_context->bios +
>>>> data_offset +
>>>>> -				 le16_to_cpu(power_info-
>>>>> pplib4.usMaxClockVoltageOnDCOffset));
>>>>> -			if (clk_v->ucNumEntries) {
>>>>> -				adev-
>>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
>>>>> -					le16_to_cpu(clk_v-
>>>>> entries[0].usSclkLow) |
>>>>> -					(clk_v->entries[0].ucSclkHigh << 16);
>>>>> -				adev-
>>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
>>>>> -					le16_to_cpu(clk_v->entries[0].usMclkLow) |
>>>>> -					(clk_v->entries[0].ucMclkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
>>>>> -					le16_to_cpu(clk_v->entries[0].usVddc);
>>>>> -				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
>>>>> -					le16_to_cpu(clk_v->entries[0].usVddci);
>>>>> -			}
>>>>> -		}
>>>>> -		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
>>>>> -			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
>>>>> -				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
>>>>> -			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
>>>>> -
>>>>> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
>>>>> -				kcalloc(psl->ucNumEntries,
>>>>> -					sizeof(struct amdgpu_phase_shedding_limits_entry),
>>>>> -					GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -
>>>>> -			entry = &psl->entries[0];
>>>>> -			for (i = 0; i < psl->ucNumEntries; i++) {
>>>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
>>>>> -					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
>>>>> -					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
>>>>> -					le16_to_cpu(entry->usVoltage);
>>>>> -				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
>>>>> -				psl->ucNumEntries;
>>>>> -		}
>>>>> -	}
>>>>> -
>>>>> -	/* cac data */
>>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
>>>>> -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
>>>>> -		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
>>>>> -		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
>>>>> -		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
>>>>> -		if (adev->pm.dpm.tdp_od_limit)
>>>>> -			adev->pm.dpm.power_control = true;
>>>>> -		else
>>>>> -			adev->pm.dpm.power_control = false;
>>>>> -		adev->pm.dpm.tdp_adjustment = 0;
>>>>> -		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
>>>>> -		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
>>>>> -		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
>>>>> -		if (power_info->pplib5.usCACLeakageTableOffset) {
>>>>> -			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
>>>>> -				(ATOM_PPLIB_CAC_Leakage_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
>>>>> -			ATOM_PPLIB_CAC_Leakage_Record *entry;
>>>>> -			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
>>>>> -			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			entry = &cac_table->entries[0];
>>>>> -			for (i = 0; i < cac_table->ucNumEntries; i++) {
>>>>> -				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
>>>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
>>>>> -						le16_to_cpu(entry->usVddc1);
>>>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
>>>>> -						le16_to_cpu(entry->usVddc2);
>>>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
>>>>> -						le16_to_cpu(entry->usVddc3);
>>>>> -				} else {
>>>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
>>>>> -						le16_to_cpu(entry->usVddc);
>>>>> -					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
>>>>> -						le32_to_cpu(entry->ulLeakageValue);
>>>>> -				}
>>>>> -				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
>>>>> -		}
>>>>> -	}
>>>>> -
>>>>> -	/* ext tables */
>>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>>>> -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
>>>>> -			(mode_info->atom_context->bios + data_offset +
>>>>> -			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
>>>>> -			ext_hdr->usVCETableOffset) {
>>>>> -			VCEClockInfoArray *array = (VCEClockInfoArray *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
>>>>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
>>>>> -				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>>>> -				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
>>>>> -			ATOM_PPLIB_VCE_State_Table *states =
>>>>> -				(ATOM_PPLIB_VCE_State_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>>>> -				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
>>>>> -				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
>>>>> -			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
>>>>> -			ATOM_PPLIB_VCE_State_Record *state_entry;
>>>>> -			VCEClockInfo *vce_clk;
>>>>> -			u32 size = limits->numEntries *
>>>>> -				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
>>>>> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
>>>>> -				kzalloc(size, GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
>>>>> -				limits->numEntries;
>>>>> -			entry = &limits->entries[0];
>>>>> -			state_entry = &states->entries[0];
>>>>> -			for (i = 0; i < limits->numEntries; i++) {
>>>>> -				vce_clk = (VCEClockInfo *)
>>>>> -					((u8 *)&array->entries[0] +
>>>>> -					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
>>>>> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
>>>>> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
>>>>> -					le16_to_cpu(entry->usVoltage);
>>>>> -				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
>>>>> -			}
>>>>> -			adev->pm.dpm.num_of_vce_states =
>>>>> -					states->numEntries > AMD_MAX_VCE_LEVELS ?
>>>>> -					AMD_MAX_VCE_LEVELS : states->numEntries;
>>>>> -			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
>>>>> -				vce_clk = (VCEClockInfo *)
>>>>> -					((u8 *)&array->entries[0] +
>>>>> -					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>>>> -				adev->pm.dpm.vce_states[i].evclk =
>>>>> -					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>>>> -				adev->pm.dpm.vce_states[i].ecclk =
>>>>> -					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>>>> -				adev->pm.dpm.vce_states[i].clk_idx =
>>>>> -					state_entry->ucClockInfoIndex & 0x3f;
>>>>> -				adev->pm.dpm.vce_states[i].pstate =
>>>>> -					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
>>>>> -				state_entry = (ATOM_PPLIB_VCE_State_Record *)
>>>>> -					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
>>>>> -			}
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
>>>>> -			ext_hdr->usUVDTableOffset) {
>>>>> -			UVDClockInfoArray *array = (UVDClockInfoArray *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
>>>>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
>>>>> -				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
>>>>> -				 1 + (array->ucNumEntries * sizeof(UVDClockInfo)));
>>>>> -			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
>>>>> -			u32 size = limits->numEntries *
>>>>> -				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
>>>>> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
>>>>> -				kzalloc(size, GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
>>>>> -				limits->numEntries;
>>>>> -			entry = &limits->entries[0];
>>>>> -			for (i = 0; i < limits->numEntries; i++) {
>>>>> -				UVDClockInfo *uvd_clk = (UVDClockInfo *)
>>>>> -					((u8 *)&array->entries[0] +
>>>>> -					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
>>>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
>>>>> -					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
>>>>> -					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
>>>>> -					le16_to_cpu(entry->usVoltage);
>>>>> -				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
>>>>> -			}
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
>>>>> -			ext_hdr->usSAMUTableOffset) {
>>>>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
>>>>> -				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
>>>>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
>>>>> -			u32 size = limits->numEntries *
>>>>> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
>>>>> -				kzalloc(size, GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
>>>>> -				limits->numEntries;
>>>>> -			entry = &limits->entries[0];
>>>>> -			for (i = 0; i < limits->numEntries; i++) {
>>>>> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
>>>>> -					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
>>>>> -					le16_to_cpu(entry->usVoltage);
>>>>> -				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
>>>>> -			}
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
>>>>> -		    ext_hdr->usPPMTableOffset) {
>>>>> -			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usPPMTableOffset));
>>>>> -			adev->pm.dpm.dyn_state.ppm_table =
>>>>> -				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.ppm_table) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
>>>>> -				le16_to_cpu(ppm->usCpuCoreNumber);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
>>>>> -				le32_to_cpu(ppm->ulPlatformTDP);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
>>>>> -				le32_to_cpu(ppm->ulSmallACPlatformTDP);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
>>>>> -				le32_to_cpu(ppm->ulPlatformTDC);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
>>>>> -				le32_to_cpu(ppm->ulSmallACPlatformTDC);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
>>>>> -				le32_to_cpu(ppm->ulApuTDP);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
>>>>> -				le32_to_cpu(ppm->ulDGpuTDP);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
>>>>> -				le32_to_cpu(ppm->ulDGpuUlvPower);
>>>>> -			adev->pm.dpm.dyn_state.ppm_table->tj_max =
>>>>> -				le32_to_cpu(ppm->ulTjmax);
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
>>>>> -			ext_hdr->usACPTableOffset) {
>>>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
>>>>> -				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
>>>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
>>>>> -			u32 size = limits->numEntries *
>>>>> -				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
>>>>> -				kzalloc(size, GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
>>>>> -				limits->numEntries;
>>>>> -			entry = &limits->entries[0];
>>>>> -			for (i = 0; i < limits->numEntries; i++) {
>>>>> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
>>>>> -					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
>>>>> -				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
>>>>> -					le16_to_cpu(entry->usVoltage);
>>>>> -				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
>>>>> -					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
>>>>> -			}
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
>>>>> -			ext_hdr->usPowerTuneTableOffset) {
>>>>> -			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
>>>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> -			ATOM_PowerTune_Table *pt;
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table =
>>>>> -				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
>>>>> -			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
>>>>> -				amdgpu_free_extended_power_table(adev);
>>>>> -				return -ENOMEM;
>>>>> -			}
>>>>> -			if (rev > 0) {
>>>>> -				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
>>>>> -					(mode_info->atom_context->bios + data_offset +
>>>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
>>>>> -					ppt->usMaximumPowerDeliveryLimit;
>>>>> -				pt = &ppt->power_tune_table;
>>>>> -			} else {
>>>>> -				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
>>>>> -					(mode_info->atom_context->bios + data_offset +
>>>>> -					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> -				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
>>>>> -				pt = &ppt->power_tune_table;
>>>>> -			}
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
>>>>> -				le16_to_cpu(pt->usConfigurableTDP);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
>>>>> -				le16_to_cpu(pt->usBatteryPowerLimit);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
>>>>> -				le16_to_cpu(pt->usSmallPowerLimit);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
>>>>> -				le16_to_cpu(pt->usLowCACLeakage);
>>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
>>>>> -				le16_to_cpu(pt->usHighCACLeakage);
>>>>> -		}
>>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
>>>>> -				ext_hdr->usSclkVddgfxTableOffset) {
>>>>> -			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> -				(mode_info->atom_context->bios + data_offset +
>>>>> -				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
>>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(
>>>>> -					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
>>>>> -					dep_table);
>>>>> -			if (ret) {
>>>>> -				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
>>>>> -				return ret;
>>>>> -			}
>>>>> -		}
>>>>> -	}
>>>>> -
>>>>> -	return 0;
>>>>> -}
>>>>> -
>>>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
>>>>> -
>>>>> -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
>>>>> -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
>>>>> -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
>>>>> -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
>>>>> -	kfree(dyn_state->cac_leakage_table.entries);
>>>>> -	kfree(dyn_state->phase_shedding_limits_table.entries);
>>>>> -	kfree(dyn_state->ppm_table);
>>>>> -	kfree(dyn_state->cac_tdp_table);
>>>>> -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
>>>>> -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
>>>>> -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
>>>>> -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
>>>>> -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
>>>>> -}
>>>>> -
>>>>> -static const char *pp_lib_thermal_controller_names[] = {
>>>>> -	"NONE",
>>>>> -	"lm63",
>>>>> -	"adm1032",
>>>>> -	"adm1030",
>>>>> -	"max6649",
>>>>> -	"lm64",
>>>>> -	"f75375",
>>>>> -	"RV6xx",
>>>>> -	"RV770",
>>>>> -	"adt7473",
>>>>> -	"NONE",
>>>>> -	"External GPIO",
>>>>> -	"Evergreen",
>>>>> -	"emc2103",
>>>>> -	"Sumo",
>>>>> -	"Northern Islands",
>>>>> -	"Southern Islands",
>>>>> -	"lm96163",
>>>>> -	"Sea Islands",
>>>>> -	"Kaveri/Kabini",
>>>>> -};
>>>>> -
>>>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
>>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> -	ATOM_PPLIB_THERMALCONTROLLER *controller;
>>>>> -	struct amdgpu_i2c_bus_rec i2c_bus;
>>>>> -	u16 data_offset;
>>>>> -	u8 frev, crev;
>>>>> -
>>>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>>>> -				   &frev, &crev, &data_offset))
>>>>> -		return;
>>>>> -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
>>>>> -		(mode_info->atom_context->bios + data_offset);
>>>>> -	controller = &power_table->sThermalController;
>>>>> -
>>>>> -	/* add the i2c bus for thermal/fan chip */
>>>>> -	if (controller->ucType > 0) {
>>>>> -		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
>>>>> -			adev->pm.no_fan = true;
>>>>> -		adev->pm.fan_pulses_per_revolution =
>>>>> -			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
>>>>> -		if (adev->pm.fan_pulses_per_revolution) {
>>>>> -			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
>>>>> -			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
>>>>> -		}
>>>>> -		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
>>>>> -			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
>>>>> -		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
>>>>> -			DRM_INFO("External GPIO thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
>>>>> -		} else if (controller->ucType ==
>>>>> -			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
>>>>> -			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
>>>>> -		} else if (controller->ucType ==
>>>>> -			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
>>>>> -			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
>>>>> -		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
>>>>> -			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
>>>>> -				 pp_lib_thermal_controller_names[controller->ucType],
>>>>> -				 controller->ucI2cAddress >> 1,
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
>>>>> -			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
>>>>> -			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
>>>>> -			if (adev->pm.i2c_bus) {
>>>>> -				struct i2c_board_info info = { };
>>>>> -				const char *name = pp_lib_thermal_controller_names[controller->ucType];
>>>>> -				info.addr = controller->ucI2cAddress >> 1;
>>>>> -				strlcpy(info.type, name, sizeof(info.type));
>>>>> -				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
>>>>> -			}
>>>>> -		} else {
>>>>> -			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
>>>>> -				 controller->ucType,
>>>>> -				 controller->ucI2cAddress >> 1,
>>>>> -				 (controller->ucFanParameters &
>>>>> -				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> -		}
>>>>> -	}
>>>>> -}
>>>>> -
>>>>> -struct amd_vce_state*
>>>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx)
>>>>> -{
>>>>> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>>>> -
>>>>> -	if (idx < adev->pm.dpm.num_of_vce_states)
>>>>> -		return &adev->pm.dpm.vce_states[idx];
>>>>> -
>>>>> -	return NULL;
>>>>> -}
>>>>> -
>>>>>     int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
>>>>>     {
>>>>>     	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
>>>>> @@ -1243,211 +465,6 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
>>>>>     	amdgpu_pm_compute_clocks(adev);
>>>>>     }
>>>>>
>>>>> -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
>>>>> -						     enum amd_pm_state_type dpm_state)
>>>>> -{
>>>>> -	int i;
>>>>> -	struct amdgpu_ps *ps;
>>>>> -	u32 ui_class;
>>>>> -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
>>>>> -		true : false;
>>>>> -
>>>>> -	/* check if the vblank period is too short to adjust the mclk */
>>>>> -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
>>>>> -		if (amdgpu_dpm_vblank_too_short(adev))
>>>>> -			single_display = false;
>>>>> -	}
>>>>> -
>>>>> -	/* certain older asics have a separare 3D performance state,
>>>>> -	 * so try that first if the user selected performance
>>>>> -	 */
>>>>> -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
>>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
>>>>> -	/* balanced states don't exist at the moment */
>>>>> -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
>>>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> -
>>>>> -restart_search:
>>>>> -	/* Pick the best power state based on current conditions */
>>>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
>>>>> -		ps = &adev->pm.dpm.ps[i];
>>>>> -		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
>>>>> -		switch (dpm_state) {
>>>>> -		/* user states */
>>>>> -		case POWER_STATE_TYPE_BATTERY:
>>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
>>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> -					if (single_display)
>>>>> -						return ps;
>>>>> -				} else
>>>>> -					return ps;
>>>>> -			}
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_BALANCED:
>>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
>>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> -					if (single_display)
>>>>> -						return ps;
>>>>> -				} else
>>>>> -					return ps;
>>>>> -			}
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_PERFORMANCE:
>>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
>>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> -					if (single_display)
>>>>> -						return ps;
>>>>> -				} else
>>>>> -					return ps;
>>>>> -			}
>>>>> -			break;
>>>>> -		/* internal states */
>>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD:
>>>>> -			if (adev->pm.dpm.uvd_ps)
>>>>> -				return adev->pm.dpm.uvd_ps;
>>>>> -			else
>>>>> -				break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_BOOT:
>>>>> -			return adev->pm.dpm.boot_ps;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_ACPI:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_ULV:
>>>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>>>> -				return ps;
>>>>> -			break;
>>>>> -		default:
>>>>> -			break;
>>>>> -		}
>>>>> -	}
>>>>> -	/* use a fallback state if we didn't match */
>>>>> -	switch (dpm_state) {
>>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
>>>>> -		goto restart_search;
>>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>>>> -		if (adev->pm.dpm.uvd_ps) {
>>>>> -			return adev->pm.dpm.uvd_ps;
>>>>> -		} else {
>>>>> -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> -			goto restart_search;
>>>>> -		}
>>>>> -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
>>>>> -		goto restart_search;
>>>>> -	case POWER_STATE_TYPE_INTERNAL_ACPI:
>>>>> -		dpm_state = POWER_STATE_TYPE_BATTERY;
>>>>> -		goto restart_search;
>>>>> -	case POWER_STATE_TYPE_BATTERY:
>>>>> -	case POWER_STATE_TYPE_BALANCED:
>>>>> -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> -		goto restart_search;
>>>>> -	default:
>>>>> -		break;
>>>>> -	}
>>>>> -
>>>>> -	return NULL;
>>>>> -}
>>>>> -
>>>>> -static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	struct amdgpu_ps *ps;
>>>>> -	enum amd_pm_state_type dpm_state;
>>>>> -	int ret;
>>>>> -	bool equal = false;
>>>>> -
>>>>> -	/* if dpm init failed */
>>>>> -	if (!adev->pm.dpm_enabled)
>>>>> -		return;
>>>>> -
>>>>> -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
>>>>> -		/* add other state override checks here */
>>>>> -		if ((!adev->pm.dpm.thermal_active) &&
>>>>> -		    (!adev->pm.dpm.uvd_active))
>>>>> -			adev->pm.dpm.state = adev->pm.dpm.user_state;
>>>>> -	}
>>>>> -	dpm_state = adev->pm.dpm.state;
>>>>> -
>>>>> -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
>>>>> -	if (ps)
>>>>> -		adev->pm.dpm.requested_ps = ps;
>>>>> -	else
>>>>> -		return;
>>>>> -
>>>>> -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
>>>>> -		printk("switching from power state:\n");
>>>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
>>>>> -		printk("switching to power state:\n");
>>>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
>>>>> -	}
>>>>> -
>>>>> -	/* update whether vce is active */
>>>>> -	ps->vce_active = adev->pm.dpm.vce_active;
>>>>> -	if (adev->powerplay.pp_funcs->display_configuration_changed)
>>>>> -		amdgpu_dpm_display_configuration_changed(adev);
>>>>> -
>>>>> -	ret = amdgpu_dpm_pre_set_power_state(adev);
>>>>> -	if (ret)
>>>>> -		return;
>>>>> -
>>>>> -	if (adev->powerplay.pp_funcs->check_state_equal) {
>>>>> -		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
>>>>> -			equal = false;
>>>>> -	}
>>>>> -
>>>>> -	if (equal)
>>>>> -		return;
>>>>> -
>>>>> -	if (adev->powerplay.pp_funcs->set_power_state)
>>>>> -		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
>>>>> -
>>>>> -	amdgpu_dpm_post_set_power_state(adev);
>>>>> -
>>>>> -	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
>>>>> -	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
>>>>> -
>>>>> -	if (adev->powerplay.pp_funcs->force_performance_level) {
>>>>> -		if (adev->pm.dpm.thermal_active) {
>>>>> -			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
>>>>> -			/* force low perf level for thermal */
>>>>> -			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
>>>>> -			/* save the user's level */
>>>>> -			adev->pm.dpm.forced_level = level;
>>>>> -		} else {
>>>>> -			/* otherwise, user selected level */
>>>>> -			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
>>>>> -		}
>>>>> -	}
>>>>> -}
>>>>> -
>>>>>     void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>>>     {
>>>>
>>>> Rename to amdgpu_dpm_compute_clocks?
>>> [Quan, Evan] Sure, I can do that.
>>>>
>>>>>     	int i = 0;
>>>>> @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>>>     			amdgpu_fence_wait_empty(ring);
>>>>>     	}
>>>>>
>>>>> -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
>>>>> +	if ((adev->family == AMDGPU_FAMILY_SI) ||
>>>>> +	     (adev->family == AMDGPU_FAMILY_KV)) {
>>>>> +		amdgpu_dpm_get_active_displays(adev);
>>>>> +		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
>>>>
>>>> It would be clearer if the newly added logic in this function were in
>>>> another patch. This does more than what the patch subject says.
>>> [Quan, Evan] Actually, no new logic is added. This branch covers the
>>> "!adev->powerplay.pp_funcs->dispatch_tasks" case. Since SI and KV are
>>> the only ASICs which do not have ->dispatch_tasks() implemented, I used
>>> "((adev->family == AMDGPU_FAMILY_SI) || (adev->family == AMDGPU_FAMILY_KV))" here.
>>> Maybe I should stick with "!adev->powerplay.pp_funcs->dispatch_tasks"?
>>
>> This change also adds a new callback change_power_state(). I interpreted
>> it as something different from what the patch subject says.
>>
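[For reference, the two forms of the branch discussed above stay equivalent
only while SI and KV remain the sole ASICs without ->dispatch_tasks(). A
capability-based variant, as Evan suggests, would look roughly like this --
a sketch against the quoted hunk, not the committed patch:

	void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
	{
		/* ... fence waits elided ... */
		if (!adev->powerplay.pp_funcs->dispatch_tasks) {
			/* legacy (SI/KV) path: no task dispatcher exists,
			 * so pick and program a power state directly */
			amdgpu_dpm_get_active_displays(adev);
			adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
		} else {
			/* powerplay/swsmu path: hand the display config
			 * change to the task dispatcher */
			amdgpu_dpm_dispatch_task(adev,
						 AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
		}
	}

The capability check avoids hard-coding the family list, at the cost of
being less explicit about which ASICs take the legacy path.]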
>>>>
>>>>> +	} else {
>>>>>     		if (!amdgpu_device_has_dc_support(adev)) {
>>>>> -			mutex_lock(&adev->pm.mutex);
>>>>>     			amdgpu_dpm_get_active_displays(adev);
>>>>>     			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
>>>>>     			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
>>>>> @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
>>>>>     				adev->powerplay.pp_funcs->display_configuration_change(
>>>>>     							adev->powerplay.pp_handle,
>>>>>     							&adev->pm.pm_display_cfg);
>>>>> -			mutex_unlock(&adev->pm.mutex);
>>>>>     		}
>>>>>     		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
>>>>> -	} else {
>>>>> -		mutex_lock(&adev->pm.mutex);
>>>>> -		amdgpu_dpm_get_active_displays(adev);
>>>>> -		amdgpu_dpm_change_power_state_locked(adev);
>>>>> -		mutex_unlock(&adev->pm.mutex);
>>>>>     	}
>>>>>     }
>>>>>
>>>>> @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
>>>>>     	}
>>>>>     }
>>>>>
>>>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
>>>>> -{
>>>>> -	int i;
>>>>> -
>>>>> -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
>>>>> -		return;
>>>>> -
>>>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
>>>>> -		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
>>>>> -
>>>>> -}
>>>>> -
>>>>>     void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
>>>>>     {
>>>>>     	int ret = 0;
>>>>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>>>> index 01120b302590..295d2902aef7 100644
>>>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
>>>>> @@ -366,24 +366,10 @@ enum amdgpu_display_gap
>>>>>         AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
>>>>>     };
>>>>>
>>>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
>>>>> -void amdgpu_dpm_print_cap_info(u32 caps);
>>>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>>>> -				struct amdgpu_ps *rps);
>>>>>     u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
>>>>>     int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
>>>>>     			   void *data, uint32_t *size);
>>>>>
>>>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
>>>>> -
>>>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
>>>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
>>>>> -
>>>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
>>>>> -
>>>>> -struct amd_vce_state*
>>>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx);
>>>>> -
>>>>>     int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
>>>>>     				      uint32_t block_type, bool gate);
>>>>>
>>>>> @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
>>>>>     void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
>>>>>     void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
>>>>>     void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
>>>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>>>>     int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
>>>>>     int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
>>>>>     int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
>>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>>>> index 0fb114adc79f..614d8b6a58ad 100644
>>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
>>>>> @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
>>>>>
>>>>>     include $(AMD_POWERPLAY)
>>>>>
>>>>> -POWER_MGR-y = amd_powerplay.o
>>>>> +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
>>>>>
>>>>>     POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>>>> index 380a5336c74f..90f4c65659e2 100644
>>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
>>>>> @@ -36,6 +36,7 @@
>>>>>
>>>>>     #include "gca/gfx_7_2_d.h"
>>>>>     #include "gca/gfx_7_2_sh_mask.h"
>>>>> +#include "legacy_dpm.h"
>>>>>
>>>>>     #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
>>>>>     #define KV_MINIMUM_ENGINE_CLOCK         800
>>>>> @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
>>>>>     	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>>>>>     	.check_state_equal = kv_check_state_equal,
>>>>>     	.read_sensor = &kv_dpm_read_sensor,
>>>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>>>>>     };
>>>>>
>>>>>     static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
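[The legacy path reaches amdgpu_dpm_change_power_state_locked() through the
new change_power_state() hook in amd_pm_funcs. A minimal sketch of the
wiring, abridged from the kv_dpm.c hunk just quoted -- the real
kv_dpm_funcs table sets many more callbacks, and si_dpm.c is assumed to
gain the same line:

	/* kv_dpm.c (abridged): the common helper from legacy_dpm.c is
	 * plugged into the per-ASIC function table, so that
	 * amdgpu_pm_compute_clocks() can stay backend-agnostic */
	static const struct amd_pm_funcs kv_dpm_funcs = {
		/* ... existing callbacks ... */
		.check_state_equal = kv_check_state_equal,
		.read_sensor = &kv_dpm_read_sensor,
		.change_power_state = amdgpu_dpm_change_power_state_locked,
	};
]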
>>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>>>>
>>>> This could be confused with all of the APIs that support legacy dpm. This
>>>> file holds only a subset of the APIs supporting legacy dpm, so it needs a
>>>> better name - powerplay_ctrl/powerplay_util?
>>> [Quan, Evan] The "legacy_dpm" name refers to the logic used only by
>>> si/kv (si_dpm.c, kv_dpm.c). That logic is unused by default (the radeon
>>> driver, not amdgpu, supports those legacy ASICs by default), and we might
>>> eventually drop support for them from the amdgpu driver. So I gathered
>>> all of those APIs and put them in a new holder.
>>> Maybe you wrongly took it as a new holder for the powerplay APIs (used by
>>> VI/AI)?
>>
>> As it got moved under powerplay, I thought they were also used in AI/VI
>> powerplay. Otherwise, move si/kv along with this out of powerplay and
>> keep them separate.
>>
>> Thanks,
>> Lijo
>>
>>>
>>> BR
>>> Evan
>>>>
>>>> Thanks,
>>>> Lijo
>>>>
>>>>> new file mode 100644
>>>>> index 000000000000..9427c1026e1d
>>>>> --- /dev/null
>>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
>>>>> @@ -0,0 +1,1453 @@
>>>>> +/*
>>>>> + * Copyright 2021 Advanced Micro Devices, Inc.
>>>>> + *
>>>>> + * Permission is hereby granted, free of charge, to any person obtaining
>> a
>>>>> + * copy of this software and associated documentation files (the
>>>> "Software"),
>>>>> + * to deal in the Software without restriction, including without
>> limitation
>>>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>>>> + * and/or sell copies of the Software, and to permit persons to whom
>> the
>>>>> + * Software is furnished to do so, subject to the following conditions:
>>>>> + *
>>>>> + * The above copyright notice and this permission notice shall be
>> included
>>>> in
>>>>> + * all copies or substantial portions of the Software.
>>>>> + *
>>>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
>>>> KIND, EXPRESS OR
>>>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>>>> MERCHANTABILITY,
>>>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
>>>> NO EVENT SHALL
>>>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
>>>> DAMAGES OR
>>>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
>>>> OTHERWISE,
>>>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
>> OR
>>>> THE USE OR
>>>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>>>> + */
>>>>> +
>>>>> +#include "amdgpu.h"
>>>>> +#include "amdgpu_atombios.h"
>>>>> +#include "amdgpu_i2c.h"
>>>>> +#include "atom.h"
>>>>> +#include "amd_pcie.h"
>>>>> +#include "legacy_dpm.h"
>>>>> +
>>>>> +#define amdgpu_dpm_pre_set_power_state(adev) \
>>>>> +		((adev)->powerplay.pp_funcs-
>>>>> pre_set_power_state((adev)->powerplay.pp_handle))
>>>>> +
>>>>> +#define amdgpu_dpm_post_set_power_state(adev) \
>>>>> +		((adev)->powerplay.pp_funcs-
>>>>> post_set_power_state((adev)->powerplay.pp_handle))
>>>>> +
>>>>> +#define amdgpu_dpm_display_configuration_changed(adev) \
>>>>> +		((adev)->powerplay.pp_funcs-
>>>>> display_configuration_changed((adev)->powerplay.pp_handle))
>>>>> +
>>>>> +#define amdgpu_dpm_print_power_state(adev, ps) \
>>>>> +		((adev)->powerplay.pp_funcs->print_power_state((adev)-
>>>>> powerplay.pp_handle, (ps)))
>>>>> +
>>>>> +#define amdgpu_dpm_vblank_too_short(adev) \
>>>>> +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)-
>>>>> powerplay.pp_handle))
>>>>> +
>>>>> +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
>>>>> +		((adev)->powerplay.pp_funcs->check_state_equal((adev)-
>>>>> powerplay.pp_handle, (cps), (rps), (equal)))
>>>>> +
>>>>> +int amdgpu_atombios_get_memory_pll_dividers(struct
>> amdgpu_device
>>>> *adev,
>>>>> +					    u32 clock,
>>>>> +					    bool strobe_mode,
>>>>> +					    struct atom_mpll_param
>>>> *mpll_param)
>>>>> +{
>>>>> +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
>>>>> +	int index = GetIndexIntoMasterTable(COMMAND,
>>>> ComputeMemoryClockParam);
>>>>> +	u8 frev, crev;
>>>>> +
>>>>> +	memset(&args, 0, sizeof(args));
>>>>> +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
>>>>> +
>>>>> +	if (!amdgpu_atom_parse_cmd_header(adev-
>>>>> mode_info.atom_context, index, &frev, &crev))
>>>>> +		return -EINVAL;
>>>>> +
>>>>> +	switch (frev) {
>>>>> +	case 2:
>>>>> +		switch (crev) {
>>>>> +		case 1:
>>>>> +			/* SI */
>>>>> +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
>>>>> +			args.ucInputFlag = 0;
>>>>> +			if (strobe_mode)
>>>>> +				args.ucInputFlag |=
>>>> MPLL_INPUT_FLAG_STROBE_MODE_EN;
>>>>> +
>>>>> +			amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> +
>>>>> +			mpll_param->clkfrac =
>>>> le16_to_cpu(args.ulFbDiv.usFbDivFrac);
>>>>> +			mpll_param->clkf =
>>>> le16_to_cpu(args.ulFbDiv.usFbDiv);
>>>>> +			mpll_param->post_div = args.ucPostDiv;
>>>>> +			mpll_param->dll_speed = args.ucDllSpeed;
>>>>> +			mpll_param->bwcntl = args.ucBWCntl;
>>>>> +			mpll_param->vco_mode =
>>>>> +				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_VCO_MODE_MASK);
>>>>> +			mpll_param->yclk_sel =
>>>>> +				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
>>>>> +			mpll_param->qdr =
>>>>> +				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
>>>>> +			mpll_param->half_rate =
>>>>> +				(args.ucPllCntlFlag &
>>>> MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
>>>>> +			break;
>>>>> +		default:
>>>>> +			return -EINVAL;
>>>>> +		}
>>>>> +		break;
>>>>> +	default:
>>>>> +		return -EINVAL;
>>>>> +	}
>>>>> +	return 0;
>>>>> +}
>>>>> +
>>>>> +void amdgpu_atombios_set_engine_dram_timings(struct
>>>> amdgpu_device *adev,
>>>>> +					     u32 eng_clock, u32 mem_clock)
>>>>> +{
>>>>> +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
>>>>> +	int index = GetIndexIntoMasterTable(COMMAND,
>>>> DynamicMemorySettings);
>>>>> +	u32 tmp;
>>>>> +
>>>>> +	memset(&args, 0, sizeof(args));
>>>>> +
>>>>> +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
>>>>> +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
>>>>> +
>>>>> +	args.ulTargetEngineClock = cpu_to_le32(tmp);
>>>>> +	if (mem_clock)
>>>>> +		args.sReserved.ulClock = cpu_to_le32(mem_clock &
>>>> SET_CLOCK_FREQ_MASK);
>>>>> +
>>>>> +	amdgpu_atom_execute_table(adev->mode_info.atom_context,
>>>> index, (uint32_t *)&args);
>>>>> +}
>>>>> +
>>>>> +union firmware_info {
>>>>> +	ATOM_FIRMWARE_INFO info;
>>>>> +	ATOM_FIRMWARE_INFO_V1_2 info_12;
>>>>> +	ATOM_FIRMWARE_INFO_V1_3 info_13;
>>>>> +	ATOM_FIRMWARE_INFO_V1_4 info_14;
>>>>> +	ATOM_FIRMWARE_INFO_V2_1 info_21;
>>>>> +	ATOM_FIRMWARE_INFO_V2_2 info_22;
>>>>> +};
>>>>> +
>>>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device
>>>> *adev,
>>>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd)
>>>>> +{
>>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
>>>>> +	u8 frev, crev;
>>>>> +	u16 data_offset;
>>>>> +	union firmware_info *firmware_info;
>>>>> +
>>>>> +	*vddc = 0;
>>>>> +	*vddci = 0;
>>>>> +	*mvdd = 0;
>>>>> +
>>>>> +	if (amdgpu_atom_parse_data_header(mode_info->atom_context,
>>>> index, NULL,
>>>>> +				   &frev, &crev, &data_offset)) {
>>>>> +		firmware_info =
>>>>> +			(union firmware_info *)(mode_info->atom_context-
>>>>> bios +
>>>>> +						data_offset);
>>>>> +		*vddc = le16_to_cpu(firmware_info-
>>>>> info_14.usBootUpVDDCVoltage);
>>>>> +		if ((frev == 2) && (crev >= 2)) {
>>>>> +			*vddci = le16_to_cpu(firmware_info-
>>>>> info_22.usBootUpVDDCIVoltage);
>>>>> +			*mvdd = le16_to_cpu(firmware_info-
>>>>> info_22.usBootUpMVDDCVoltage);
>>>>> +		}
>>>>> +	}
>>>>> +}
>>>>> +
>>>>> +union set_voltage {
>>>>> +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
>>>>> +	struct _SET_VOLTAGE_PARAMETERS v1;
>>>>> +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
>>>>> +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
>>>>> +};
>>>>> +
>>>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev,
>> u8
>>>> voltage_type,
>>>>> +			     u16 voltage_id, u16 *voltage)
>>>>> +{
>>>>> +	union set_voltage args;
>>>>> +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
>>>>> +	u8 frev, crev;
>>>>> +
>>>>> +	if (!amdgpu_atom_parse_cmd_header(adev-
>>>>> mode_info.atom_context, index, &frev, &crev))
>>>>> +		return -EINVAL;
>>>>> +
>>>>> +	switch (crev) {
>>>>> +	case 1:
>>>>> +		return -EINVAL;
>>>>> +	case 2:
>>>>> +		args.v2.ucVoltageType =
>>>> SET_VOLTAGE_GET_MAX_VOLTAGE;
>>>>> +		args.v2.ucVoltageMode = 0;
>>>>> +		args.v2.usVoltageLevel = 0;
>>>>> +
>>>>> +		amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> +
>>>>> +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
>>>>> +		break;
>>>>> +	case 3:
>>>>> +		args.v3.ucVoltageType = voltage_type;
>>>>> +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
>>>>> +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
>>>>> +
>>>>> +		amdgpu_atom_execute_table(adev-
>>>>> mode_info.atom_context, index, (uint32_t *)&args);
>>>>> +
>>>>> +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
>>>>> +		break;
>>>>> +	default:
>>>>> +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>>>> +		return -EINVAL;
>>>>> +	}
>>>>> +
>>>>> +	return 0;
>>>>> +}
>>>>> +
>>>>> +int
>> amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct
>>>> amdgpu_device *adev,
>>>>> +						      u16 *voltage,
>>>>> +						      u16 leakage_idx)
>>>>> +{
>>>>> +	return amdgpu_atombios_get_max_vddc(adev,
>>>> VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
>>>>> +}
>>>>> +
>>>>> +union voltage_object_info {
>>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
>>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
>>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
>>>>> +};
>>>>> +
>>>>> +union voltage_object {
>>>>> +	struct _ATOM_VOLTAGE_OBJECT v1;
>>>>> +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
>>>>> +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
>>>>> +};
>>>>> +
>>>>> +static ATOM_VOLTAGE_OBJECT_V3
>>>>
>> *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_I
>>>> NFO_V3_1 *v3,
>>>>> +									u8
>>>> voltage_type, u8 voltage_mode)
>>>>> +{
>>>>> +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
>>>>> +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1,
>>>> asVoltageObj[0]);
>>>>> +	u8 *start = (u8 *)v3;
>>>>> +
>>>>> +	while (offset < size) {
>>>>> +		ATOM_VOLTAGE_OBJECT_V3 *vo =
>>>> (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
>>>>> +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType ==
>>>> voltage_type) &&
>>>>> +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode ==
>>>> voltage_mode))
>>>>> +			return vo;
>>>>> +		offset += le16_to_cpu(vo-
>>>>> asGpioVoltageObj.sHeader.usSize);
>>>>> +	}
>>>>> +	return NULL;
>>>>> +}
>>>>> +
>>>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>>>> +			      u8 voltage_type,
>>>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
>>>>> +{
>>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> +	u8 frev, crev;
>>>>> +	u16 data_offset, size;
>>>>> +	union voltage_object_info *voltage_info;
>>>>> +	union voltage_object *voltage_object = NULL;
>>>>> +
>>>>> +	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> +				   &frev, &crev, &data_offset)) {
>>>>> +		voltage_info = (union voltage_object_info *)
>>>>> +			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> +
>>>>> +		switch (frev) {
>>>>> +		case 3:
>>>>> +			switch (crev) {
>>>>> +			case 1:
>>>>> +				voltage_object = (union voltage_object *)
>>>>> +
>>>> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> +
>>>> voltage_type,
>>>>> +
>>>> VOLTAGE_OBJ_SVID2);
>>>>> +				if (voltage_object) {
>>>>> +					*svd_gpio_id = voltage_object-
>>>>> v3.asSVID2Obj.ucSVDGpioId;
>>>>> +					*svc_gpio_id = voltage_object-
>>>>> v3.asSVID2Obj.ucSVCGpioId;
>>>>> +				} else {
>>>>> +					return -EINVAL;
>>>>> +				}
>>>>> +				break;
>>>>> +			default:
>>>>> +				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> +				return -EINVAL;
>>>>> +			}
>>>>> +			break;
>>>>> +		default:
>>>>> +			DRM_ERROR("unknown voltage object table\n");
>>>>> +			return -EINVAL;
>>>>> +		}
>>>>> +
>>>>> +	}
>>>>> +	return 0;
>>>>> +}
>>>>> +
>>>>> +bool
>>>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>>>> +				u8 voltage_type, u8 voltage_mode)
>>>>> +{
>>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> +	u8 frev, crev;
>>>>> +	u16 data_offset, size;
>>>>> +	union voltage_object_info *voltage_info;
>>>>> +
>>>>> +	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> +				   &frev, &crev, &data_offset)) {
>>>>> +		voltage_info = (union voltage_object_info *)
>>>>> +			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> +
>>>>> +		switch (frev) {
>>>>> +		case 3:
>>>>> +			switch (crev) {
>>>>> +			case 1:
>>>>> +				if
>>>> (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> +
>>>> voltage_type, voltage_mode))
>>>>> +					return true;
>>>>> +				break;
>>>>> +			default:
>>>>> +				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> +				return false;
>>>>> +			}
>>>>> +			break;
>>>>> +		default:
>>>>> +			DRM_ERROR("unknown voltage object table\n");
>>>>> +			return false;
>>>>> +		}
>>>>> +
>>>>> +	}
>>>>> +	return false;
>>>>> +}
>>>>> +
>>>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device
>> *adev,
>>>>> +				      u8 voltage_type, u8 voltage_mode,
>>>>> +				      struct atom_voltage_table *voltage_table)
>>>>> +{
>>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
>>>>> +	u8 frev, crev;
>>>>> +	u16 data_offset, size;
>>>>> +	int i;
>>>>> +	union voltage_object_info *voltage_info;
>>>>> +	union voltage_object *voltage_object = NULL;
>>>>> +
>>>>> +	if (amdgpu_atom_parse_data_header(adev-
>>>>> mode_info.atom_context, index, &size,
>>>>> +				   &frev, &crev, &data_offset)) {
>>>>> +		voltage_info = (union voltage_object_info *)
>>>>> +			(adev->mode_info.atom_context->bios +
>>>> data_offset);
>>>>> +
>>>>> +		switch (frev) {
>>>>> +		case 3:
>>>>> +			switch (crev) {
>>>>> +			case 1:
>>>>> +				voltage_object = (union voltage_object *)
>>>>> +
>>>> 	amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
>>>>> +
>>>> voltage_type, voltage_mode);
>>>>> +				if (voltage_object) {
>>>>> +					ATOM_GPIO_VOLTAGE_OBJECT_V3
>>>> *gpio =
>>>>> +						&voltage_object-
>>>>> v3.asGpioVoltageObj;
>>>>> +					VOLTAGE_LUT_ENTRY_V2 *lut;
>>>>> +					if (gpio->ucGpioEntryNum >
>>>> MAX_VOLTAGE_ENTRIES)
>>>>> +						return -EINVAL;
>>>>> +					lut = &gpio->asVolGpioLut[0];
>>>>> +					for (i = 0; i < gpio->ucGpioEntryNum;
>>>> i++) {
>>>>> +						voltage_table-
>>>>> entries[i].value =
>>>>> +							le16_to_cpu(lut-
>>>>> usVoltageValue);
>>>>> +						voltage_table-
>>>>> entries[i].smio_low =
>>>>> +							le32_to_cpu(lut-
>>>>> ulVoltageId);
>>>>> +						lut =
>>>> (VOLTAGE_LUT_ENTRY_V2 *)
>>>>> +							((u8 *)lut +
>>>> sizeof(VOLTAGE_LUT_ENTRY_V2));
>>>>> +					}
>>>>> +					voltage_table->mask_low =
>>>> le32_to_cpu(gpio->ulGpioMaskVal);
>>>>> +					voltage_table->count = gpio-
>>>>> ucGpioEntryNum;
>>>>> +					voltage_table->phase_delay = gpio-
>>>>> ucPhaseDelay;
>>>>> +					return 0;
>>>>> +				}
>>>>> +				break;
>>>>> +			default:
>>>>> +				DRM_ERROR("unknown voltage object
>>>> table\n");
>>>>> +				return -EINVAL;
>>>>> +			}
>>>>> +			break;
>>>>> +		default:
>>>>> +			DRM_ERROR("unknown voltage object table\n");
>>>>> +			return -EINVAL;
>>>>> +		}
>>>>> +	}
>>>>> +	return -EINVAL;
>>>>> +}
>>>>> +
>>>>> +union vram_info {
>>>>> +	struct _ATOM_VRAM_INFO_V3 v1_3;
>>>>> +	struct _ATOM_VRAM_INFO_V4 v1_4;
>>>>> +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
>>>>> +};
>>>>> +
>>>>> +#define MEM_ID_MASK           0xff000000
>>>>> +#define MEM_ID_SHIFT          24
>>>>> +#define CLOCK_RANGE_MASK      0x00ffffff
>>>>> +#define CLOCK_RANGE_SHIFT     0
>>>>> +#define LOW_NIBBLE_MASK       0xf
>>>>> +#define DATA_EQU_PREV         0
>>>>> +#define DATA_FROM_TABLE       4
>>>>> +
>>>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>>>> +				      u8 module_index,
>>>>> +				      struct atom_mc_reg_table *reg_table)
>>>>> +{
>>>>> +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
>>>>> +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
>>>>> +	u32 i = 0, j;
>>>>> +	u16 data_offset, size;
>>>>> +	union vram_info *vram_info;
>>>>> +
>>>>> +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
>>>>> +
>>>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
>>>>> +				   &frev, &crev, &data_offset)) {
>>>>> +		vram_info = (union vram_info *)
>>>>> +			(adev->mode_info.atom_context->bios + data_offset);
>>>>> +		switch (frev) {
>>>>> +		case 1:
>>>>> +			DRM_ERROR("old table version %d, %d\n", frev, crev);
>>>>> +			return -EINVAL;
>>>>> +		case 2:
>>>>> +			switch (crev) {
>>>>> +			case 1:
>>>>> +				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
>>>>> +					ATOM_INIT_REG_BLOCK *reg_block =
>>>>> +						(ATOM_INIT_REG_BLOCK *)
>>>>> +						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
>>>>> +					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
>>>>> +						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>>>> +						((u8 *)reg_block + (2 * sizeof(u16)) +
>>>>> +						 le16_to_cpu(reg_block->usRegIndexTblSize));
>>>>> +					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
>>>>> +					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
>>>>> +							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
>>>>> +					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
>>>>> +						return -EINVAL;
>>>>> +					while (i < num_entries) {
>>>>> +						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
>>>>> +							break;
>>>>> +						reg_table->mc_reg_address[i].s1 =
>>>>> +							(u16)(le16_to_cpu(format->usRegIndex));
>>>>> +						reg_table->mc_reg_address[i].pre_reg_data =
>>>>> +							(u8)(format->ucPreRegDataLength);
>>>>> +						i++;
>>>>> +						format = (ATOM_INIT_REG_INDEX_FORMAT *)
>>>>> +							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
>>>>> +					}
>>>>> +					reg_table->last = i;
>>>>> +					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
>>>>> +					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
>>>>> +						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
>>>>> +								>> MEM_ID_SHIFT);
>>>>> +						if (module_index == t_mem_id) {
>>>>> +							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
>>>>> +								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
>>>>> +								      >> CLOCK_RANGE_SHIFT);
>>>>> +							for (i = 0, j = 1; i < reg_table->last; i++) {
>>>>> +								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
>>>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>>>> +										(u32)le32_to_cpu(*((u32 *)reg_data + j));
>>>>> +									j++;
>>>>> +								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
>>>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
>>>>> +										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
>>>>> +								}
>>>>> +							}
>>>>> +							num_ranges++;
>>>>> +						}
>>>>> +						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
>>>>> +							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
>>>>> +					}
>>>>> +					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
>>>>> +						return -EINVAL;
>>>>> +					reg_table->num_entries = num_ranges;
>>>>> +				} else
>>>>> +					return -EINVAL;
>>>>> +				break;
>>>>> +			default:
>>>>> +				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>>>> +				return -EINVAL;
>>>>> +			}
>>>>> +			break;
>>>>> +		default:
>>>>> +			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
>>>>> +			return -EINVAL;
>>>>> +		}
>>>>> +		return 0;
>>>>> +	}
>>>>> +	return -EINVAL;
>>>>> +}
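For reference while reviewing the move: everything amdgpu_atombios_init_mc_reg_table() produces is keyed by the per-range mclk_max. A minimal sketch of the lookup a consumer performs against the finished table, assuming the ranges come out of the VBIOS sorted by ascending mclk_max; pick_mc_values() is a made-up name for illustration, the real consumer logic stays in si_dpm.c:

	/* Illustrative sketch only, not part of this patch: return the MC
	 * register values whose AC timing range covers the requested mclk.
	 */
	static const u32 *pick_mc_values(const struct atom_mc_reg_table *t, u32 mclk)
	{
		int i;

		for (i = 0; i < t->num_entries; i++) {
			if (mclk <= t->mc_reg_table_entry[i].mclk_max)
				return t->mc_reg_table_entry[i].mc_data;
		}

		return NULL;	/* no range covers this clock */
	}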
>>>>> +
>>>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
>>>>> +{
>>>>> +	const char *s;
>>>>> +
>>>>> +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
>>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
>>>>> +	default:
>>>>> +		s = "none";
>>>>> +		break;
>>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
>>>>> +		s = "battery";
>>>>> +		break;
>>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
>>>>> +		s = "balanced";
>>>>> +		break;
>>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
>>>>> +		s = "performance";
>>>>> +		break;
>>>>> +	}
>>>>> +	printk("\tui class: %s\n", s);
>>>>> +	printk("\tinternal class:");
>>>>> +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
>>>>> +	    (class2 == 0))
>>>>> +		pr_cont(" none");
>>>>> +	else {
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
>>>>> +			pr_cont(" boot");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>>>> +			pr_cont(" thermal");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
>>>>> +			pr_cont(" limited_pwr");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
>>>>> +			pr_cont(" rest");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
>>>>> +			pr_cont(" forced");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>>>> +			pr_cont(" 3d_perf");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
>>>>> +			pr_cont(" ovrdrv");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
>>>>> +			pr_cont(" uvd");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
>>>>> +			pr_cont(" 3d_low");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>>>> +			pr_cont(" acpi");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>>>> +			pr_cont(" uvd_hd2");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>>>> +			pr_cont(" uvd_hd");
>>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>>>> +			pr_cont(" uvd_sd");
>>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
>>>>> +			pr_cont(" limited_pwr2");
>>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>>>> +			pr_cont(" ulv");
>>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>>>> +			pr_cont(" uvd_mvc");
>>>>> +	}
>>>>> +	pr_cont("\n");
>>>>> +}
>>>>> +
>>>>> +void amdgpu_dpm_print_cap_info(u32 caps)
>>>>> +{
>>>>> +	printk("\tcaps:");
>>>>> +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
>>>>> +		pr_cont(" single_disp");
>>>>> +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
>>>>> +		pr_cont(" video");
>>>>> +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
>>>>> +		pr_cont(" no_dc");
>>>>> +	pr_cont("\n");
>>>>> +}
>>>>> +
>>>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>>>> +				struct amdgpu_ps *rps)
>>>>> +{
>>>>> +	printk("\tstatus:");
>>>>> +	if (rps == adev->pm.dpm.current_ps)
>>>>> +		pr_cont(" c");
>>>>> +	if (rps == adev->pm.dpm.requested_ps)
>>>>> +		pr_cont(" r");
>>>>> +	if (rps == adev->pm.dpm.boot_ps)
>>>>> +		pr_cont(" b");
>>>>> +	pr_cont("\n");
>>>>> +}
>>>>> +
>>>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
>>>>> +{
>>>>> +	int i;
>>>>> +
>>>>> +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
>>>>> +		return;
>>>>> +
>>>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
>>>>> +		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
>>>>> +
>>>>> +}
>>>>> +
>>>>> +union power_info {
>>>>> +	struct _ATOM_POWERPLAY_INFO info;
>>>>> +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
>>>>> +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
>>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
>>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
>>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
>>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
>>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
>>>>> +};
>>>>> +
>>>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
>>>>> +{
>>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> +	union power_info *power_info;
>>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> +	u16 data_offset;
>>>>> +	u8 frev, crev;
>>>>> +
>>>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>>>> +				   &frev, &crev, &data_offset))
>>>>> +		return -EINVAL;
>>>>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>>>> +
>>>>> +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
>>>>> +	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
>>>>> +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
>>>>> +
>>>>> +	return 0;
>>>>> +}
>>>>> +
>>>>> +union fan_info {
>>>>> +	struct _ATOM_PPLIB_FANTABLE fan;
>>>>> +	struct _ATOM_PPLIB_FANTABLE2 fan2;
>>>>> +	struct _ATOM_PPLIB_FANTABLE3 fan3;
>>>>> +};
>>>>> +
>>>>> +static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
>>>>> +					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
>>>>> +{
>>>>> +	u32 size = atom_table->ucNumEntries *
>>>>> +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> +	int i;
>>>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
>>>>> +
>>>>> +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
>>>>> +	if (!amdgpu_table->entries)
>>>>> +		return -ENOMEM;
>>>>> +
>>>>> +	entry = &atom_table->entries[0];
>>>>> +	for (i = 0; i < atom_table->ucNumEntries; i++) {
>>>>> +		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
>>>>> +			(entry->ucClockHigh << 16);
>>>>> +		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
>>>>> +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
>>>>> +			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
>>>>> +	}
>>>>> +	amdgpu_table->count = atom_table->ucNumEntries;
>>>>> +
>>>>> +	return 0;
>>>>> +}
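The table filled in above is what the si/kv code later walks to answer "which voltage does this clock level need". A sketch of that lookup, with min_voltage_for_clk() being a hypothetical helper name rather than an API introduced here:

	/* Illustrative sketch: entries are assumed sorted by ascending clk. */
	static u16 min_voltage_for_clk(const struct amdgpu_clock_voltage_dependency_table *tbl,
				       u32 clk)
	{
		int i;

		for (i = 0; i < tbl->count; i++) {
			if (tbl->entries[i].clk >= clk)
				return tbl->entries[i].v;
		}

		/* clock above the table: fall back to the highest entry */
		return tbl->count ? tbl->entries[tbl->count - 1].v : 0;
	}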
>>>>> +
>>>>> +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
>>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
>>>>> +
>>>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
>>>>> +{
>>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> +	union power_info *power_info;
>>>>> +	union fan_info *fan_info;
>>>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Table *dep_table;
>>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> +	u16 data_offset;
>>>>> +	u8 frev, crev;
>>>>> +	int ret, i;
>>>>> +
>>>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>>>> +				   &frev, &crev, &data_offset))
>>>>> +		return -EINVAL;
>>>>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
>>>>> +
>>>>> +	/* fan table */
>>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>>>> +		if (power_info->pplib3.usFanTableOffset) {
>>>>> +			fan_info = (union fan_info *)(mode_info->atom_context->bios + data_offset +
>>>>> +						      le16_to_cpu(power_info->pplib3.usFanTableOffset));
>>>>> +			adev->pm.dpm.fan.t_hyst = fan_info->fan.ucTHyst;
>>>>> +			adev->pm.dpm.fan.t_min = le16_to_cpu(fan_info->fan.usTMin);
>>>>> +			adev->pm.dpm.fan.t_med = le16_to_cpu(fan_info->fan.usTMed);
>>>>> +			adev->pm.dpm.fan.t_high = le16_to_cpu(fan_info->fan.usTHigh);
>>>>> +			adev->pm.dpm.fan.pwm_min = le16_to_cpu(fan_info->fan.usPWMMin);
>>>>> +			adev->pm.dpm.fan.pwm_med = le16_to_cpu(fan_info->fan.usPWMMed);
>>>>> +			adev->pm.dpm.fan.pwm_high = le16_to_cpu(fan_info->fan.usPWMHigh);
>>>>> +			if (fan_info->fan.ucFanTableFormat >= 2)
>>>>> +				adev->pm.dpm.fan.t_max = le16_to_cpu(fan_info->fan2.usTMax);
>>>>> +			else
>>>>> +				adev->pm.dpm.fan.t_max = 10900;
>>>>> +			adev->pm.dpm.fan.cycle_delay = 100000;
>>>>> +			if (fan_info->fan.ucFanTableFormat >= 3) {
>>>>> +				adev->pm.dpm.fan.control_mode = fan_info->fan3.ucFanControlMode;
>>>>> +				adev->pm.dpm.fan.default_max_fan_pwm =
>>>>> +					le16_to_cpu(fan_info->fan3.usFanPWMMax);
>>>>> +				adev->pm.dpm.fan.default_fan_output_sensitivity = 4836;
>>>>> +				adev->pm.dpm.fan.fan_output_sensitivity =
>>>>> +					le16_to_cpu(fan_info->fan3.usFanOutputSensitivity);
>>>>> +			}
>>>>> +			adev->pm.dpm.fan.ucode_fan_control = true;
>>>>> +		}
>>>>> +	}
>>>>> +
>>>>> +	/* clock dependency tables, shedding tables */
>>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
>>>>> +		if (power_info->pplib4.usVddcDependencyOnSCLKOffset) {
>>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
>>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
>>>>> +								 dep_table);
>>>>> +			if (ret) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return ret;
>>>>> +			}
>>>>> +		}
>>>>> +		if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
>>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
>>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
>>>>> +								 dep_table);
>>>>> +			if (ret) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return ret;
>>>>> +			}
>>>>> +		}
>>>>> +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
>>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
>>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
>>>>> +								 dep_table);
>>>>> +			if (ret) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return ret;
>>>>> +			}
>>>>> +		}
>>>>> +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
>>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
>>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
>>>>> +								 dep_table);
>>>>> +			if (ret) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return ret;
>>>>> +			}
>>>>> +		}
>>>>> +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
>>>>> +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
>>>>> +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
>>>>> +			if (clk_v->ucNumEntries) {
>>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
>>>>> +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
>>>>> +					(clk_v->entries[0].ucSclkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
>>>>> +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
>>>>> +					(clk_v->entries[0].ucMclkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
>>>>> +					le16_to_cpu(clk_v->entries[0].usVddc);
>>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
>>>>> +					le16_to_cpu(clk_v->entries[0].usVddci);
>>>>> +			}
>>>>> +		}
>>>>> +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
>>>>> +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
>>>>> +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
>>>>> +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
>>>>> +
>>>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
>>>>> +				kcalloc(psl->ucNumEntries,
>>>>> +					sizeof(struct amdgpu_phase_shedding_limits_entry),
>>>>> +					GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +
>>>>> +			entry = &psl->entries[0];
>>>>> +			for (i = 0; i < psl->ucNumEntries; i++) {
>>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
>>>>> +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
>>>>> +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
>>>>> +					le16_to_cpu(entry->usVoltage);
>>>>> +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
>>>>> +				psl->ucNumEntries;
>>>>> +		}
>>>>> +	}
>>>>> +
>>>>> +	/* cac data */
>>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
>>>>> +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
>>>>> +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
>>>>> +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
>>>>> +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
>>>>> +		if (adev->pm.dpm.tdp_od_limit)
>>>>> +			adev->pm.dpm.power_control = true;
>>>>> +		else
>>>>> +			adev->pm.dpm.power_control = false;
>>>>> +		adev->pm.dpm.tdp_adjustment = 0;
>>>>> +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
>>>>> +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
>>>>> +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
>>>>> +		if (power_info->pplib5.usCACLeakageTableOffset) {
>>>>> +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
>>>>> +				(ATOM_PPLIB_CAC_Leakage_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
>>>>> +			ATOM_PPLIB_CAC_Leakage_Record *entry;
>>>>> +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
>>>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			entry = &cac_table->entries[0];
>>>>> +			for (i = 0; i < cac_table->ucNumEntries; i++) {
>>>>> +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
>>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
>>>>> +						le16_to_cpu(entry->usVddc1);
>>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
>>>>> +						le16_to_cpu(entry->usVddc2);
>>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
>>>>> +						le16_to_cpu(entry->usVddc3);
>>>>> +				} else {
>>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
>>>>> +						le16_to_cpu(entry->usVddc);
>>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
>>>>> +						le32_to_cpu(entry->ulLeakageValue);
>>>>> +				}
>>>>> +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
>>>>> +		}
>>>>> +	}
>>>>> +
>>>>> +	/* ext tables */
>>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
>>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
>>>>> +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
>>>>> +			(mode_info->atom_context->bios + data_offset +
>>>>> +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
>>>>> +			ext_hdr->usVCETableOffset) {
>>>>> +			VCEClockInfoArray *array = (VCEClockInfoArray *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
>>>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
>>>>> +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>>>> +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
>>>>> +			ATOM_PPLIB_VCE_State_Table *states =
>>>>> +				(ATOM_PPLIB_VCE_State_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
>>>>> +				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
>>>>> +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
>>>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
>>>>> +			ATOM_PPLIB_VCE_State_Record *state_entry;
>>>>> +			VCEClockInfo *vce_clk;
>>>>> +			u32 size = limits->numEntries *
>>>>> +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
>>>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
>>>>> +				kzalloc(size, GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
>>>>> +				limits->numEntries;
>>>>> +			entry = &limits->entries[0];
>>>>> +			state_entry = &states->entries[0];
>>>>> +			for (i = 0; i < limits->numEntries; i++) {
>>>>> +				vce_clk = (VCEClockInfo *)
>>>>> +					((u8 *)&array->entries[0] +
>>>>> +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
>>>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
>>>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
>>>>> +					le16_to_cpu(entry->usVoltage);
>>>>> +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
>>>>> +			}
>>>>> +			adev->pm.dpm.num_of_vce_states =
>>>>> +					states->numEntries > AMD_MAX_VCE_LEVELS ?
>>>>> +					AMD_MAX_VCE_LEVELS : states->numEntries;
>>>>> +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
>>>>> +				vce_clk = (VCEClockInfo *)
>>>>> +					((u8 *)&array->entries[0] +
>>>>> +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
>>>>> +				adev->pm.dpm.vce_states[i].evclk =
>>>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
>>>>> +				adev->pm.dpm.vce_states[i].ecclk =
>>>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
>>>>> +				adev->pm.dpm.vce_states[i].clk_idx =
>>>>> +					state_entry->ucClockInfoIndex & 0x3f;
>>>>> +				adev->pm.dpm.vce_states[i].pstate =
>>>>> +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
>>>>> +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
>>>>> +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
>>>>> +			}
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
>>>>> +			ext_hdr->usUVDTableOffset) {
>>>>> +			UVDClockInfoArray *array = (UVDClockInfoArray *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
>>>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
>>>>> +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
>>>>> +				 1 + (array->ucNumEntries * sizeof(UVDClockInfo)));
>>>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
>>>>> +			u32 size = limits->numEntries *
>>>>> +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
>>>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
>>>>> +				kzalloc(size, GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
>>>>> +				limits->numEntries;
>>>>> +			entry = &limits->entries[0];
>>>>> +			for (i = 0; i < limits->numEntries; i++) {
>>>>> +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
>>>>> +					((u8 *)&array->entries[0] +
>>>>> +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
>>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
>>>>> +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
>>>>> +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
>>>>> +					le16_to_cpu(entry->usVoltage);
>>>>> +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
>>>>> +			}
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
>>>>> +			ext_hdr->usSAMUTableOffset) {
>>>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
>>>>> +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
>>>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
>>>>> +			u32 size = limits->numEntries *
>>>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
>>>>> +				kzalloc(size, GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
>>>>> +				limits->numEntries;
>>>>> +			entry = &limits->entries[0];
>>>>> +			for (i = 0; i < limits->numEntries; i++) {
>>>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
>>>>> +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
>>>>> +					le16_to_cpu(entry->usVoltage);
>>>>> +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
>>>>> +			}
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
>>>>> +		    ext_hdr->usPPMTableOffset) {
>>>>> +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
>>>>> +			adev->pm.dpm.dyn_state.ppm_table =
>>>>> +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.ppm_table) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
>>>>> +				le16_to_cpu(ppm->usCpuCoreNumber);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
>>>>> +				le32_to_cpu(ppm->ulPlatformTDP);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
>>>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
>>>>> +				le32_to_cpu(ppm->ulPlatformTDC);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
>>>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
>>>>> +				le32_to_cpu(ppm->ulApuTDP);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
>>>>> +				le32_to_cpu(ppm->ulDGpuTDP);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
>>>>> +				le32_to_cpu(ppm->ulDGpuUlvPower);
>>>>> +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
>>>>> +				le32_to_cpu(ppm->ulTjmax);
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
>>>>> +			ext_hdr->usACPTableOffset) {
>>>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
>>>>> +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
>>>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
>>>>> +			u32 size = limits->numEntries *
>>>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
>>>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
>>>>> +				kzalloc(size, GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
>>>>> +				limits->numEntries;
>>>>> +			entry = &limits->entries[0];
>>>>> +			for (i = 0; i < limits->numEntries; i++) {
>>>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
>>>>> +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
>>>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
>>>>> +					le16_to_cpu(entry->usVoltage);
>>>>> +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
>>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
>>>>> +			}
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
>>>>> +			ext_hdr->usPowerTuneTableOffset) {
>>>>> +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
>>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> +			ATOM_PowerTune_Table *pt;
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table =
>>>>> +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
>>>>> +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
>>>>> +				amdgpu_free_extended_power_table(adev);
>>>>> +				return -ENOMEM;
>>>>> +			}
>>>>> +			if (rev > 0) {
>>>>> +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
>>>>> +					(mode_info->atom_context->bios + data_offset +
>>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
>>>>> +					ppt->usMaximumPowerDeliveryLimit;
>>>>> +				pt = &ppt->power_tune_table;
>>>>> +			} else {
>>>>> +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
>>>>> +					(mode_info->atom_context->bios + data_offset +
>>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
>>>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
>>>>> +				pt = &ppt->power_tune_table;
>>>>> +			}
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
>>>>> +				le16_to_cpu(pt->usConfigurableTDP);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
>>>>> +				le16_to_cpu(pt->usBatteryPowerLimit);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
>>>>> +				le16_to_cpu(pt->usSmallPowerLimit);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
>>>>> +				le16_to_cpu(pt->usLowCACLeakage);
>>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
>>>>> +				le16_to_cpu(pt->usHighCACLeakage);
>>>>> +		}
>>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
>>>>> +				ext_hdr->usSclkVddgfxTableOffset) {
>>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
>>>>> +				(mode_info->atom_context->bios + data_offset +
>>>>> +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
>>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(
>>>>> +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
>>>>> +					dep_table);
>>>>> +			if (ret) {
>>>>> +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
>>>>> +				return ret;
>>>>> +			}
>>>>> +		}
>>>>> +	}
>>>>> +
>>>>> +	return 0;
>>>>> +}
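The three helpers kept together here are meant to be called in a fixed order during dpm init. A sketch of the expected caller shape, assuming the si/kv init paths keep their current structure; example_dpm_init() is illustrative only, not a function added by this patch:

	static int example_dpm_init(struct amdgpu_device *adev)
	{
		int ret;

		ret = amdgpu_get_platform_caps(adev);		/* platform caps first */
		if (ret)
			return ret;

		ret = amdgpu_parse_extended_power_table(adev);	/* then the pplib tables */
		if (ret)
			return ret;

		amdgpu_add_thermal_controller(adev);		/* thermal/fan setup, no return value */
		return 0;
	}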
>>>>> +
>>>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
>>>>> +{
>>>>> +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
>>>>> +
>>>>> +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
>>>>> +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
>>>>> +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
>>>>> +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
>>>>> +	kfree(dyn_state->cac_leakage_table.entries);
>>>>> +	kfree(dyn_state->phase_shedding_limits_table.entries);
>>>>> +	kfree(dyn_state->ppm_table);
>>>>> +	kfree(dyn_state->cac_tdp_table);
>>>>> +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
>>>>> +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
>>>>> +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
>>>>> +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
>>>>> +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
>>>>> +}
>>>>> +
>>>>> +static const char *pp_lib_thermal_controller_names[] = {
>>>>> +	"NONE",
>>>>> +	"lm63",
>>>>> +	"adm1032",
>>>>> +	"adm1030",
>>>>> +	"max6649",
>>>>> +	"lm64",
>>>>> +	"f75375",
>>>>> +	"RV6xx",
>>>>> +	"RV770",
>>>>> +	"adt7473",
>>>>> +	"NONE",
>>>>> +	"External GPIO",
>>>>> +	"Evergreen",
>>>>> +	"emc2103",
>>>>> +	"Sumo",
>>>>> +	"Northern Islands",
>>>>> +	"Southern Islands",
>>>>> +	"lm96163",
>>>>> +	"Sea Islands",
>>>>> +	"Kaveri/Kabini",
>>>>> +};
>>>>> +
>>>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
>>>>> +{
>>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
>>>>> +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
>>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
>>>>> +	ATOM_PPLIB_THERMALCONTROLLER *controller;
>>>>> +	struct amdgpu_i2c_bus_rec i2c_bus;
>>>>> +	u16 data_offset;
>>>>> +	u8 frev, crev;
>>>>> +
>>>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
>>>>> +				   &frev, &crev, &data_offset))
>>>>> +		return;
>>>>> +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
>>>>> +		(mode_info->atom_context->bios + data_offset);
>>>>> +	controller = &power_table->sThermalController;
>>>>> +
>>>>> +	/* add the i2c bus for thermal/fan chip */
>>>>> +	if (controller->ucType > 0) {
>>>>> +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
>>>>> +			adev->pm.no_fan = true;
>>>>> +		adev->pm.fan_pulses_per_revolution =
>>>>> +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
>>>>> +		if (adev->pm.fan_pulses_per_revolution) {
>>>>> +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
>>>>> +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
>>>>> +		}
>>>>> +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
>>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
>>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
>>>>> +			DRM_INFO("External GPIO thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
>>>>> +		} else if (controller->ucType ==
>>>>> +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
>>>>> +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
>>>>> +		} else if (controller->ucType ==
>>>>> +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
>>>>> +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
>>>>> +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
>>>>> +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
>>>>> +				 pp_lib_thermal_controller_names[controller->ucType],
>>>>> +				 controller->ucI2cAddress >> 1,
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
>>>>> +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
>>>>> +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
>>>>> +			if (adev->pm.i2c_bus) {
>>>>> +				struct i2c_board_info info = { };
>>>>> +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
>>>>> +				info.addr = controller->ucI2cAddress >> 1;
>>>>> +				strlcpy(info.type, name, sizeof(info.type));
>>>>> +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
>>>>> +			}
>>>>> +		} else {
>>>>> +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
>>>>> +				 controller->ucType,
>>>>> +				 controller->ucI2cAddress >> 1,
>>>>> +				 (controller->ucFanParameters &
>>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
>>>>> +		}
>>>>> +	}
>>>>> +}
>>>>> +
>>>>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx)
>>>>> +{
>>>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>>>> +
>>>>> +	if (idx < adev->pm.dpm.num_of_vce_states)
>>>>> +		return &adev->pm.dpm.vce_states[idx];
>>>>> +
>>>>> +	return NULL;
>>>>> +}
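This helper is reached through the amd_pm_funcs table rather than called directly. A sketch of the dispatch, with query_vce_state() being a made-up wrapper for illustration only:

	static struct amd_vce_state *query_vce_state(struct amdgpu_device *adev, u32 idx)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		if (pp_funcs && pp_funcs->get_vce_clock_state)
			return pp_funcs->get_vce_clock_state(adev->powerplay.pp_handle, idx);

		return NULL;
	}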
>>>>> +
>>>>> +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
>>>>> +						     enum amd_pm_state_type dpm_state)
>>>>> +{
>>>>> +	int i;
>>>>> +	struct amdgpu_ps *ps;
>>>>> +	u32 ui_class;
>>>>> +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
>>>>> +		true : false;
>>>>> +
>>>>> +	/* check if the vblank period is too short to adjust the mclk */
>>>>> +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
>>>>> +		if (amdgpu_dpm_vblank_too_short(adev))
>>>>> +			single_display = false;
>>>>> +	}
>>>>> +
>>>>> +	/* certain older asics have a separate 3D performance state,
>>>>> +	 * so try that first if the user selected performance
>>>>> +	 */
>>>>> +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
>>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
>>>>> +	/* balanced states don't exist at the moment */
>>>>> +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
>>>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> +
>>>>> +restart_search:
>>>>> +	/* Pick the best power state based on current conditions */
>>>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
>>>>> +		ps = &adev->pm.dpm.ps[i];
>>>>> +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
>>>>> +		switch (dpm_state) {
>>>>> +		/* user states */
>>>>> +		case POWER_STATE_TYPE_BATTERY:
>>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
>>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> +					if (single_display)
>>>>> +						return ps;
>>>>> +				} else
>>>>> +					return ps;
>>>>> +			}
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_BALANCED:
>>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
>>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> +					if (single_display)
>>>>> +						return ps;
>>>>> +				} else
>>>>> +					return ps;
>>>>> +			}
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_PERFORMANCE:
>>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
>>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
>>>>> +					if (single_display)
>>>>> +						return ps;
>>>>> +				} else
>>>>> +					return ps;
>>>>> +			}
>>>>> +			break;
>>>>> +		/* internal states */
>>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD:
>>>>> +			if (adev->pm.dpm.uvd_ps)
>>>>> +				return adev->pm.dpm.uvd_ps;
>>>>> +			else
>>>>> +				break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_BOOT:
>>>>> +			return adev->pm.dpm.boot_ps;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_ACPI:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_ULV:
>>>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
>>>>> +				return ps;
>>>>> +			break;
>>>>> +		default:
>>>>> +			break;
>>>>> +		}
>>>>> +	}
>>>>> +	/* use a fallback state if we didn't match */
>>>>> +	switch (dpm_state) {
>>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
>>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
>>>>> +		goto restart_search;
>>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
>>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
>>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
>>>>> +		if (adev->pm.dpm.uvd_ps) {
>>>>> +			return adev->pm.dpm.uvd_ps;
>>>>> +		} else {
>>>>> +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> +			goto restart_search;
>>>>> +		}
>>>>> +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
>>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
>>>>> +		goto restart_search;
>>>>> +	case POWER_STATE_TYPE_INTERNAL_ACPI:
>>>>> +		dpm_state = POWER_STATE_TYPE_BATTERY;
>>>>> +		goto restart_search;
>>>>> +	case POWER_STATE_TYPE_BATTERY:
>>>>> +	case POWER_STATE_TYPE_BALANCED:
>>>>> +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
>>>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
>>>>> +		goto restart_search;
>>>>> +	default:
>>>>> +		break;
>>>>> +	}
>>>>> +
>>>>> +	return NULL;
>>>>> +}
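Worked example of the fallback chain above: a user request for PERFORMANCE is remapped at entry to INTERNAL_3DPERF (older ASICs carry a dedicated 3D performance state); if no such state exists, the fallback switch sends it back to PERFORMANCE, which then matches on UI class. Likewise a UVD_SD request degrades to UVD_HD, then to the dedicated UVD state if one exists, and finally to PERFORMANCE. Since PERFORMANCE itself has no fallback case, the function returns NULL only when no UI performance state is present at all.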
>>>>> +
>>>>> +int amdgpu_dpm_change_power_state_locked(void *handle)
>>>>> +{
>>>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>>>> +	struct amdgpu_ps *ps;
>>>>> +	enum amd_pm_state_type dpm_state;
>>>>> +	int ret;
>>>>> +	bool equal = false;
>>>>> +
>>>>> +	/* if dpm init failed */
>>>>> +	if (!adev->pm.dpm_enabled)
>>>>> +		return 0;
>>>>> +
>>>>> +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
>>>>> +		/* add other state override checks here */
>>>>> +		if ((!adev->pm.dpm.thermal_active) &&
>>>>> +		    (!adev->pm.dpm.uvd_active))
>>>>> +			adev->pm.dpm.state = adev->pm.dpm.user_state;
>>>>> +	}
>>>>> +	dpm_state = adev->pm.dpm.state;
>>>>> +
>>>>> +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
>>>>> +	if (ps)
>>>>> +		adev->pm.dpm.requested_ps = ps;
>>>>> +	else
>>>>> +		return -EINVAL;
>>>>> +
>>>>> +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
>>>>> +		printk("switching from power state:\n");
>>>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
>>>>> +		printk("switching to power state:\n");
>>>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
>>>>> +	}
>>>>> +
>>>>> +	/* update whether vce is active */
>>>>> +	ps->vce_active = adev->pm.dpm.vce_active;
>>>>> +	if (adev->powerplay.pp_funcs->display_configuration_changed)
>>>>> +		amdgpu_dpm_display_configuration_changed(adev);
>>>>> +
>>>>> +	ret = amdgpu_dpm_pre_set_power_state(adev);
>>>>> +	if (ret)
>>>>> +		return ret;
>>>>> +
>>>>> +	if (adev->powerplay.pp_funcs->check_state_equal) {
>>>>> +		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps,
>>>>> +						      adev->pm.dpm.requested_ps, &equal))
>>>>> +			equal = false;
>>>>> +	}
>>>>> +
>>>>> +	if (equal)
>>>>> +		return 0;
>>>>> +
>>>>> +	if (adev->powerplay.pp_funcs->set_power_state)
>>>>> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
>>>>> +
>>>>> +	amdgpu_dpm_post_set_power_state(adev);
>>>>> +
>>>>> +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
>>>>> +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
>>>>> +
>>>>> +	if (adev->powerplay.pp_funcs->force_performance_level) {
>>>>> +		if (adev->pm.dpm.thermal_active) {
>>>>> +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
>>>>> +			/* force low perf level for thermal */
>>>>> +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
>>>>> +			/* save the user's level */
>>>>> +			adev->pm.dpm.forced_level = level;
>>>>> +		} else {
>>>>> +			/* otherwise, user selected level */
>>>>> +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
>>>>> +		}
>>>>> +	}
>>>>> +
>>>>> +	return 0;
>>>>> +}
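This is the function the series exports through amd_pm_funcs as .change_power_state (see the si_dpm.c hunk below), so common code can drive a state switch without reaching into legacy internals. A sketch of the expected dispatch site, assuming the unified entry point keeps this shape; example_compute_clocks() is illustrative, not the final amdgpu_dpm.c code:

	static void example_compute_clocks(struct amdgpu_device *adev)
	{
		const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

		if (pp_funcs->change_power_state)
			pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}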
>>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>>>> new file mode 100644
>>>>> index 000000000000..4adc765c8824
>>>>> --- /dev/null
>>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
>>>>> @@ -0,0 +1,70 @@
>>>>> +/*
>>>>> + * Copyright 2021 Advanced Micro Devices, Inc.
>>>>> + *
>>>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>>>> + * copy of this software and associated documentation files (the "Software"),
>>>>> + * to deal in the Software without restriction, including without limitation
>>>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>>>> + * Software is furnished to do so, subject to the following conditions:
>>>>> + *
>>>>> + * The above copyright notice and this permission notice shall be included in
>>>>> + * all copies or substantial portions of the Software.
>>>>> + *
>>>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>>>> + *
>>>>> + */
>>>>> +#ifndef __LEGACY_DPM_H__
>>>>> +#define __LEGACY_DPM_H__
>>>>> +
>>>>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
>>>>> +					    u32 clock,
>>>>> +					    bool strobe_mode,
>>>>> +					    struct atom_mpll_param *mpll_param);
>>>>> +
>>>>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
>>>>> +					     u32 eng_clock, u32 mem_clock);
>>>>> +
>>>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
>>>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd);
>>>>> +
>>>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
>>>>> +			     u16 voltage_id, u16 *voltage);
>>>>> +
>>>>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
>>>>> +						      u16 *voltage,
>>>>> +						      u16 leakage_idx);
>>>>> +
>>>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
>>>>> +			      u8 voltage_type,
>>>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
>>>>> +
>>>>> +bool
>>>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
>>>>> +				u8 voltage_type, u8 voltage_mode);
>>>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
>>>>> +				      u8 voltage_type, u8 voltage_mode,
>>>>> +				      struct atom_voltage_table *voltage_table);
>>>>> +
>>>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
>>>>> +				      u8 module_index,
>>>>> +				      struct atom_mc_reg_table *reg_table);
>>>>> +
>>>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
>>>>> +void amdgpu_dpm_print_cap_info(u32 caps);
>>>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
>>>>> +				struct amdgpu_ps *rps);
>>>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
>>>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
>>>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
>>>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
>>>>> +struct amd_vce_state* amdgpu_get_vce_clock_state(void *handle, u32 idx);
>>>>> +int amdgpu_dpm_change_power_state_locked(void *handle);
>>>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
>>>>> +#endif
>>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>>>> index 4f84d8b893f1..a2881c90d187 100644
>>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
>>>>> @@ -37,6 +37,7 @@
>>>>>     #include <linux/math64.h>
>>>>>     #include <linux/seq_file.h>
>>>>>     #include <linux/firmware.h>
>>>>> +#include <legacy_dpm.h>
>>>>>
>>>>>     #define MC_CG_ARB_FREQ_F0           0x0a
>>>>>     #define MC_CG_ARB_FREQ_F1           0x0b
>>>>> @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
>>>>>     	.check_state_equal = &si_check_state_equal,
>>>>>     	.get_vce_clock_state = amdgpu_get_vce_clock_state,
>>>>>     	.read_sensor = &si_dpm_read_sensor,
>>>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
>>>>>     };
>>>>>
>>>>>     static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {
>>>>>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* RE: [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv)
  2021-12-01  7:36           ` Lazar, Lijo
@ 2021-12-02  1:24             ` Quan, Evan
  0 siblings, 0 replies; 44+ messages in thread
From: Quan, Evan @ 2021-12-02  1:24 UTC (permalink / raw)
  To: Lazar, Lijo, amd-gfx; +Cc: Deucher, Alexander, Feng, Kenneth, Koenig, Christian

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Wednesday, December 1, 2021 3:37 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
> APIs used only by legacy ASICs(si/kv)
> 
> 
> 
> On 12/1/2021 12:47 PM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -----Original Message-----
> >> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> >> Sent: Wednesday, December 1, 2021 12:19 PM
> >> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> >> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> >> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
> >> APIs used only by legacy ASICs(si/kv)
> >>
> >>
> >>
> >> On 12/1/2021 8:43 AM, Quan, Evan wrote:
> >>> [AMD Official Use Only]
> >>>
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> >>>> Sent: Tuesday, November 30, 2021 9:21 PM
> >>>> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >>>> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian
> >>>> <Christian.Koenig@amd.com>; Feng, Kenneth <Kenneth.Feng@amd.com>
> >>>> Subject: Re: [PATCH V2 07/17] drm/amd/pm: create a new holder for those
> >>>> APIs used only by legacy ASICs(si/kv)
> >>>>
> >>>>
> >>>>
> >>>> On 11/30/2021 1:12 PM, Evan Quan wrote:
> >>>>> Those APIs are used only by legacy ASICs(si/kv). They cannot be
> >>>>> shared by other ASICs. So, we create a new holder for them.
> >>>>>
> >>>>> Signed-off-by: Evan Quan <evan.quan@amd.com>
> >>>>> Change-Id: I555dfa37e783a267b1d3b3a7db5c87fcc3f1556f
> >>>>> --
> >>>>> v1->v2:
> >>>>>      - move other APIs used by si/kv in amdgpu_atombios.c to the new
> >>>>>        holder also(Alex)
> >>>>> ---
> >>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c  |  421 -----
> >>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h  |   30 -
> >>>>>     .../gpu/drm/amd/include/kgd_pp_interface.h    |    1 +
> >>>>>     drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 1008 +-----------
> >>>>>     drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   15 -
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/Makefile     |    2 +-
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c     |    2 +
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c | 1453 +++++++++++++++++
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h |   70 +
> >>>>>     drivers/gpu/drm/amd/pm/powerplay/si_dpm.c     |    2 +
> >>>>>     10 files changed, 1534 insertions(+), 1470 deletions(-)
> >>>>>     create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>>>>     create mode 100644 drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>>>>
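Judging by the diffstat (kv_dpm.c and si_dpm.c each gain exactly two lines), kv_dpm.c presumably receives the same wiring shown for si_dpm.c at the end of this patch: an #include <legacy_dpm.h> plus a .change_power_state = amdgpu_dpm_change_power_state_locked entry in its amd_pm_funcs table.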
> >>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>>>> index 12a6b1c99c93..f2e447212e62 100644
> >>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
> >>>>> @@ -1083,427 +1083,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >>>>>     	return 0;
> >>>>>     }
> >>>>>
> >>>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>>>> -					    u32 clock,
> >>>>> -					    bool strobe_mode,
> >>>>> -					    struct atom_mpll_param *mpll_param)
> >>>>> -{
> >>>>> -	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> >>>>> -	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
> >>>>> -	u8 frev, crev;
> >>>>> -
> >>>>> -	memset(&args, 0, sizeof(args));
> >>>>> -	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> >>>>> -
> >>>>> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> >>>>> -		return -EINVAL;
> >>>>> -
> >>>>> -	switch (frev) {
> >>>>> -	case 2:
> >>>>> -		switch (crev) {
> >>>>> -		case 1:
> >>>>> -			/* SI */
> >>>>> -			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> >>>>> -			args.ucInputFlag = 0;
> >>>>> -			if (strobe_mode)
> >>>>> -				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
> >>>>> -
> >>>>> -			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> -
> >>>>> -			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> >>>>> -			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
> >>>>> -			mpll_param->post_div = args.ucPostDiv;
> >>>>> -			mpll_param->dll_speed = args.ucDllSpeed;
> >>>>> -			mpll_param->bwcntl = args.ucBWCntl;
> >>>>> -			mpll_param->vco_mode =
> >>>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
> >>>>> -			mpll_param->yclk_sel =
> >>>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> >>>>> -			mpll_param->qdr =
> >>>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> >>>>> -			mpll_param->half_rate =
> >>>>> -				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			return -EINVAL;
> >>>>> -		}
> >>>>> -		break;
> >>>>> -	default:
> >>>>> -		return -EINVAL;
> >>>>> -	}
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> >>>>> -					     u32 eng_clock, u32 mem_clock)
> >>>>> -{
> >>>>> -	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> >>>>> -	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
> >>>>> -	u32 tmp;
> >>>>> -
> >>>>> -	memset(&args, 0, sizeof(args));
> >>>>> -
> >>>>> -	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> >>>>> -	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> >>>>> -
> >>>>> -	args.ulTargetEngineClock = cpu_to_le32(tmp);
> >>>>> -	if (mem_clock)
> >>>>> -		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
> >>>>> -
> >>>>> -	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> -}
> >>>>> -
> >>>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> >>>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd)
> >>>>> -{
> >>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> >>>>> -	u8 frev, crev;
> >>>>> -	u16 data_offset;
> >>>>> -	union firmware_info *firmware_info;
> >>>>> -
> >>>>> -	*vddc = 0;
> >>>>> -	*vddci = 0;
> >>>>> -	*mvdd = 0;
> >>>>> -
> >>>>> -	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> -				   &frev, &crev, &data_offset)) {
> >>>>> -		firmware_info =
> >>>>> -			(union firmware_info *)(mode_info->atom_context->bios +
> >>>>> -						data_offset);
> >>>>> -		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
> >>>>> -		if ((frev == 2) && (crev >= 2)) {
> >>>>> -			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
> >>>>> -			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
> >>>>> -		}
> >>>>> -	}
> >>>>> -}
> >>>>> -
> >>>>> -union set_voltage {
> >>>>> -	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> >>>>> -	struct _SET_VOLTAGE_PARAMETERS v1;
> >>>>> -	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> >>>>> -	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> >>>>> -};
> >>>>> -
> >>>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> >>>>> -			     u16 voltage_id, u16 *voltage)
> >>>>> -{
> >>>>> -	union set_voltage args;
> >>>>> -	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> >>>>> -	u8 frev, crev;
> >>>>> -
> >>>>> -	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> >>>>> -		return -EINVAL;
> >>>>> -
> >>>>> -	switch (crev) {
> >>>>> -	case 1:
> >>>>> -		return -EINVAL;
> >>>>> -	case 2:
> >>>>> -		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
> >>>>> -		args.v2.ucVoltageMode = 0;
> >>>>> -		args.v2.usVoltageLevel = 0;
> >>>>> -
> >>>>> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> -
> >>>>> -		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> >>>>> -		break;
> >>>>> -	case 3:
> >>>>> -		args.v3.ucVoltageType = voltage_type;
> >>>>> -		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> >>>>> -		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> >>>>> -
> >>>>> -		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> -
> >>>>> -		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> >>>>> -		break;
> >>>>> -	default:
> >>>>> -		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> -		return -EINVAL;
> >>>>> -	}
> >>>>> -
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> >>>>> -						      u16 *voltage,
> >>>>> -						      u16 leakage_idx)
> >>>>> -{
> >>>>> -	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> >>>>> -}
> >>>>> -
> >>>>> -union voltage_object_info {
> >>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> >>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> >>>>> -	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> >>>>> -};
> >>>>> -
> >>>>> -union voltage_object {
> >>>>> -	struct _ATOM_VOLTAGE_OBJECT v1;
> >>>>> -	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> >>>>> -	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> >>>>> -};
> >>>>> -
> >>>>> -
> >>>>> -static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
> >>>>> -									 u8 voltage_type, u8 voltage_mode)
> >>>>> -{
> >>>>> -	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> >>>>> -	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
> >>>>> -	u8 *start = (u8 *)v3;
> >>>>> -
> >>>>> -	while (offset < size) {
> >>>>> -		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> >>>>> -		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
> >>>>> -		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
> >>>>> -			return vo;
> >>>>> -		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
> >>>>> -	}
> >>>>> -	return NULL;
> >>>>> -}
> >>>>> -
> >>>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>>>> -			      u8 voltage_type,
> >>>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> >>>>> -{
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> -	u8 frev, crev;
> >>>>> -	u16 data_offset, size;
> >>>>> -	union voltage_object_info *voltage_info;
> >>>>> -	union voltage_object *voltage_object = NULL;
> >>>>> -
> >>>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> -				   &frev, &crev, &data_offset)) {
> >>>>> -		voltage_info = (union voltage_object_info *)
> >>>>> -			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> -
> >>>>> -		switch (frev) {
> >>>>> -		case 3:
> >>>>> -			switch (crev) {
> >>>>> -			case 1:
> >>>>> -				voltage_object = (union voltage_object *)
> >>>>> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> -										 voltage_type,
> >>>>> -										 VOLTAGE_OBJ_SVID2);
> >>>>> -				if (voltage_object) {
> >>>>> -					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
> >>>>> -					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
> >>>>> -				} else {
> >>>>> -					return -EINVAL;
> >>>>> -				}
> >>>>> -				break;
> >>>>> -			default:
> >>>>> -				DRM_ERROR("unknown voltage object table\n");
> >>>>> -				return -EINVAL;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			DRM_ERROR("unknown voltage object table\n");
> >>>>> -			return -EINVAL;
> >>>>> -		}
> >>>>> -
> >>>>> -	}
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -bool
> >>>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>>>> -				u8 voltage_type, u8 voltage_mode)
> >>>>> -{
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> -	u8 frev, crev;
> >>>>> -	u16 data_offset, size;
> >>>>> -	union voltage_object_info *voltage_info;
> >>>>> -
> >>>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> -				   &frev, &crev, &data_offset)) {
> >>>>> -		voltage_info = (union voltage_object_info *)
> >>>>> -			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> -
> >>>>> -		switch (frev) {
> >>>>> -		case 3:
> >>>>> -			switch (crev) {
> >>>>> -			case 1:
> >>>>> -				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> -									     voltage_type, voltage_mode))
> >>>>> -					return true;
> >>>>> -				break;
> >>>>> -			default:
> >>>>> -				DRM_ERROR("unknown voltage object table\n");
> >>>>> -				return false;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			DRM_ERROR("unknown voltage object table\n");
> >>>>> -			return false;
> >>>>> -		}
> >>>>> -
> >>>>> -	}
> >>>>> -	return false;
> >>>>> -}
> >>>>> -
> >>>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>>>> -				      u8 voltage_type, u8 voltage_mode,
> >>>>> -				      struct atom_voltage_table *voltage_table)
> >>>>> -{
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> -	u8 frev, crev;
> >>>>> -	u16 data_offset, size;
> >>>>> -	int i;
> >>>>> -	union voltage_object_info *voltage_info;
> >>>>> -	union voltage_object *voltage_object = NULL;
> >>>>> -
> >>>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> -				   &frev, &crev, &data_offset)) {
> >>>>> -		voltage_info = (union voltage_object_info *)
> >>>>> -			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> -
> >>>>> -		switch (frev) {
> >>>>> -		case 3:
> >>>>> -			switch (crev) {
> >>>>> -			case 1:
> >>>>> -				voltage_object = (union voltage_object *)
> >>>>> -					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> -										 voltage_type, voltage_mode);
> >>>>> -				if (voltage_object) {
> >>>>> -					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
> >>>>> -						&voltage_object->v3.asGpioVoltageObj;
> >>>>> -					VOLTAGE_LUT_ENTRY_V2 *lut;
> >>>>> -					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
> >>>>> -						return -EINVAL;
> >>>>> -					lut = &gpio->asVolGpioLut[0];
> >>>>> -					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
> >>>>> -						voltage_table->entries[i].value =
> >>>>> -							le16_to_cpu(lut->usVoltageValue);
> >>>>> -						voltage_table->entries[i].smio_low =
> >>>>> -							le32_to_cpu(lut->ulVoltageId);
> >>>>> -						lut = (VOLTAGE_LUT_ENTRY_V2 *)
> >>>>> -							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
> >>>>> -					}
> >>>>> -					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
> >>>>> -					voltage_table->count = gpio->ucGpioEntryNum;
> >>>>> -					voltage_table->phase_delay = gpio->ucPhaseDelay;
> >>>>> -					return 0;
> >>>>> -				}
> >>>>> -				break;
> >>>>> -			default:
> >>>>> -				DRM_ERROR("unknown voltage object table\n");
> >>>>> -				return -EINVAL;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			DRM_ERROR("unknown voltage object table\n");
> >>>>> -			return -EINVAL;
> >>>>> -		}
> >>>>> -	}
> >>>>> -	return -EINVAL;
> >>>>> -}
> >>>>> -
> >>>>> -union vram_info {
> >>>>> -	struct _ATOM_VRAM_INFO_V3 v1_3;
> >>>>> -	struct _ATOM_VRAM_INFO_V4 v1_4;
> >>>>> -	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> >>>>> -};
> >>>>> -
> >>>>> -#define MEM_ID_MASK           0xff000000
> >>>>> -#define MEM_ID_SHIFT          24
> >>>>> -#define CLOCK_RANGE_MASK      0x00ffffff
> >>>>> -#define CLOCK_RANGE_SHIFT     0
> >>>>> -#define LOW_NIBBLE_MASK       0xf
> >>>>> -#define DATA_EQU_PREV         0
> >>>>> -#define DATA_FROM_TABLE       4
> >>>>> -
> >>>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>>>> -				      u8 module_index,
> >>>>> -				      struct atom_mc_reg_table *reg_table)
> >>>>> -{
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> >>>>> -	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> >>>>> -	u32 i = 0, j;
> >>>>> -	u16 data_offset, size;
> >>>>> -	union vram_info *vram_info;
> >>>>> -
> >>>>> -	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> >>>>> -
> >>>>> -	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> -				   &frev, &crev, &data_offset)) {
> >>>>> -		vram_info = (union vram_info *)
> >>>>> -			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> -		switch (frev) {
> >>>>> -		case 1:
> >>>>> -			DRM_ERROR("old table version %d, %d\n", frev, crev);
> >>>>> -			return -EINVAL;
> >>>>> -		case 2:
> >>>>> -			switch (crev) {
> >>>>> -			case 1:
> >>>>> -				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
> >>>>> -					ATOM_INIT_REG_BLOCK *reg_block =
> >>>>> -						(ATOM_INIT_REG_BLOCK *)
> >>>>> -						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> >>>>> -					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> >>>>> -						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>>>> -						((u8 *)reg_block + (2 * sizeof(u16)) +
> >>>>> -						 le16_to_cpu(reg_block->usRegIndexTblSize));
> >>>>> -					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
> >>>>> -					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> >>>>> -							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> >>>>> -					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
> >>>>> -						return -EINVAL;
> >>>>> -					while (i < num_entries) {
> >>>>> -						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
> >>>>> -							break;
> >>>>> -						reg_table->mc_reg_address[i].s1 =
> >>>>> -							(u16)(le16_to_cpu(format->usRegIndex));
> >>>>> -						reg_table->mc_reg_address[i].pre_reg_data =
> >>>>> -							(u8)(format->ucPreRegDataLength);
> >>>>> -						i++;
> >>>>> -						format = (ATOM_INIT_REG_INDEX_FORMAT *)
> >>>>> -							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> >>>>> -					}
> >>>>> -					reg_table->last = i;
> >>>>> -					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> >>>>> -					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
> >>>>> -						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> >>>>> -								>> MEM_ID_SHIFT);
> >>>>> -						if (module_index == t_mem_id) {
> >>>>> -							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
> >>>>> -								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> >>>>> -								      >> CLOCK_RANGE_SHIFT);
> >>>>> -							for (i = 0, j = 1; i < reg_table->last; i++) {
> >>>>> -								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
> >>>>> -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>>>> -										(u32)le32_to_cpu(*((u32 *)reg_data + j));
> >>>>> -									j++;
> >>>>> -								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
> >>>>> -									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>>>> -										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> >>>>> -								}
> >>>>> -							}
> >>>>> -							num_ranges++;
> >>>>> -						}
> >>>>> -						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>>>> -							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
> >>>>> -					}
> >>>>> -					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
> >>>>> -						return -EINVAL;
> >>>>> -					reg_table->num_entries = num_ranges;
> >>>>> -				} else
> >>>>> -					return -EINVAL;
> >>>>> -				break;
> >>>>> -			default:
> >>>>> -				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> -				return -EINVAL;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> -			return -EINVAL;
> >>>>> -		}
> >>>>> -		return 0;
> >>>>> -	}
> >>>>> -	return -EINVAL;
> >>>>> -}
> >>>>> -
> >>>>>     bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev)
> >>>>>     {
> >>>>>     	int index = GetIndexIntoMasterTable(DATA, GPUVirtualizationInfo);
> >>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>>>> index 27e74b1fc260..cb5649298dcb 100644
> >>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
> >>>>> @@ -160,26 +160,6 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >>>>>     				       bool strobe_mode,
> >>>>>     				       struct atom_clock_dividers *dividers);
> >>>>>
> >>>>> -int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>>>> -					    u32 clock,
> >>>>> -					    bool strobe_mode,
> >>>>> -					    struct atom_mpll_param *mpll_param);
> >>>>> -
> >>>>> -void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> >>>>> -					     u32 eng_clock, u32 mem_clock);
> >>>>> -
> >>>>> -bool
> >>>>> -amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>>>> -				u8 voltage_type, u8 voltage_mode);
> >>>>> -
> >>>>> -int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>>>> -				      u8 voltage_type, u8 voltage_mode,
> >>>>> -				      struct atom_voltage_table *voltage_table);
> >>>>> -
> >>>>> -int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>>>> -				      u8 module_index,
> >>>>> -				      struct atom_mc_reg_table *reg_table);
> >>>>> -
> >>>>>     bool amdgpu_atombios_has_gpu_virtualization_table(struct amdgpu_device *adev);
> >>>>>
> >>>>>     void amdgpu_atombios_scratch_regs_lock(struct amdgpu_device *adev, bool lock);
> >>>>> @@ -190,21 +170,11 @@ void amdgpu_atombios_scratch_regs_set_backlight_level(struct amdgpu_device *adev
> >>>>>     bool amdgpu_atombios_scratch_need_asic_init(struct amdgpu_device *adev);
> >>>>>
> >>>>>     void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
> >>>>> -int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> >>>>> -			     u16 voltage_id, u16 *voltage);
> >>>>> -int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> >>>>> -						      u16 *voltage,
> >>>>> -						      u16 leakage_idx);
> >>>>> -void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> >>>>> -					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >>>>>     int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
> >>>>>     				       u8 clock_type,
> >>>>>     				       u32 clock,
> >>>>>     				       bool strobe_mode,
> >>>>>     				       struct atom_clock_dividers *dividers);
> >>>>> -int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>>>> -			      u8 voltage_type,
> >>>>> -			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> >>>>>
> >>>>>     int amdgpu_atombios_get_data_table(struct amdgpu_device *adev,
> >>>>>     				   uint32_t table,
> >>>>
> >>>>
> >>>> Whether it is used by legacy or new logic, atombios table parsing/execution
> >>>> should be kept as separate logic. These functions shouldn't be moved along
> >>>> with the dpm code.
> >>> [Quan, Evan] Are you suggesting another placeholder for those
> >>> atombios APIs? Like legacy_atombios.c?
> >>
> >> What I meant is there is no need to move them; keep them in the same file.
> >> We also have atomfirmware, so splitting this out and adding another
> >> legacy_atombios file is not required.
> > [Quan, Evan] Hmm, that seems contrary to Alex's suggestion.
> > Although I'm fine with either, I kind of prefer Alex's approach.
> > That is, if they are destined to be dropped (together with SI/KV support),
> > we should get them separated now.
> >
> 
> Hmm, that is not the way the code is structured currently. We don't keep
> files like atombios_powerplay.c or atomfirmware_smu.c; the logic related
> to atombios is kept in a single place. We could mark these as legacy APIs
> such that they get dropped whenever KV/SI support is dropped.
[Quan, Evan] OK. So how about this: these atombios-related APIs will be dropped from this patch,
and I will create another patch to wrap them under the guard "#ifdef CONFIG_DRM_AMDGPU_SI" (since I found they are all used by si_dpm.c only).
That way, they can be dropped together with SI support in the future.
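
To make it concrete, here is a rough sketch of the follow-up (my assumption only,
not final code; the prototypes are just the ones this patch currently removes from
amdgpu_atombios.h):

	/* Sketch: SI-only atombios helpers, compiled out with SI support */
	#ifdef CONFIG_DRM_AMDGPU_SI
	int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
						    u32 clock,
						    bool strobe_mode,
						    struct atom_mpll_param *mpll_param);

	void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
						     u32 eng_clock, u32 mem_clock);

	int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
					  u8 voltage_type,
					  u8 *svd_gpio_id, u8 *svc_gpio_id);
	/* ... likewise for the remaining helpers used only by si_dpm.c ... */
	#endif /* CONFIG_DRM_AMDGPU_SI */

The definitions in amdgpu_atombios.c would get the same guard, matching how the
powerplay Makefile already builds si_dpm.o only when CONFIG_DRM_AMDGPU_SI is set.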

BR
Evan
> 
> Thanks,
> Lijo
> 
> 
> > BR
> > Evan
> >>
> >>>>
> >>>>
> >>>>> diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>>>> index 2e295facd086..cdf724dcf832 100644
> >>>>> --- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>>>> +++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
> >>>>> @@ -404,6 +404,7 @@ struct amd_pm_funcs {
> >>>>>     	int (*get_dpm_clock_table)(void *handle,
> >>>>>     				   struct dpm_clocks *clock_table);
> >>>>>     	int (*get_smu_prv_buf_details)(void *handle, void **addr, size_t *size);
> >>>>> +	int (*change_power_state)(void *handle);
> >>>>>     };
> >>>>>
> >>>>>     struct metrics_table_header {
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>>>> index ecaf0081bc31..c6801d10cde6 100644
> >>>>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>>>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>>>> @@ -34,113 +34,9 @@
> >>>>>
> >>>>>     #define WIDTH_4K 3840
> >>>>>
> >>>>> -#define amdgpu_dpm_pre_set_power_state(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> >>>>> -
> >>>>> -#define amdgpu_dpm_post_set_power_state(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> >>>>> -
> >>>>> -#define amdgpu_dpm_display_configuration_changed(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> >>>>> -
> >>>>> -#define amdgpu_dpm_print_power_state(adev, ps) \
> >>>>> -		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> >>>>> -
> >>>>> -#define amdgpu_dpm_vblank_too_short(adev) \
> >>>>> -		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> >>>>> -
> >>>>>     #define amdgpu_dpm_enable_bapm(adev, e) \
> >>>>>     		((adev)->powerplay.pp_funcs->enable_bapm((adev)->powerplay.pp_handle, (e)))
> >>>>>
> >>>>> -#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> >>>>> -		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> >>>>> -
> >>>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> >>>>> -{
> >>>>> -	const char *s;
> >>>>> -
> >>>>> -	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> >>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> >>>>> -	default:
> >>>>> -		s = "none";
> >>>>> -		break;
> >>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> >>>>> -		s = "battery";
> >>>>> -		break;
> >>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> >>>>> -		s = "balanced";
> >>>>> -		break;
> >>>>> -	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> >>>>> -		s = "performance";
> >>>>> -		break;
> >>>>> -	}
> >>>>> -	printk("\tui class: %s\n", s);
> >>>>> -	printk("\tinternal class:");
> >>>>> -	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> >>>>> -	    (class2 == 0))
> >>>>> -		pr_cont(" none");
> >>>>> -	else {
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> >>>>> -			pr_cont(" boot");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>>>> -			pr_cont(" thermal");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> >>>>> -			pr_cont(" limited_pwr");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> >>>>> -			pr_cont(" rest");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> >>>>> -			pr_cont(" forced");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>>>> -			pr_cont(" 3d_perf");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> >>>>> -			pr_cont(" ovrdrv");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> >>>>> -			pr_cont(" uvd");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> >>>>> -			pr_cont(" 3d_low");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>>>> -			pr_cont(" acpi");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>>>> -			pr_cont(" uvd_hd2");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>>>> -			pr_cont(" uvd_hd");
> >>>>> -		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>>>> -			pr_cont(" uvd_sd");
> >>>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> >>>>> -			pr_cont(" limited_pwr2");
> >>>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>>>> -			pr_cont(" ulv");
> >>>>> -		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>>>> -			pr_cont(" uvd_mvc");
> >>>>> -	}
> >>>>> -	pr_cont("\n");
> >>>>> -}
> >>>>> -
> >>>>> -void amdgpu_dpm_print_cap_info(u32 caps)
> >>>>> -{
> >>>>> -	printk("\tcaps:");
> >>>>> -	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> >>>>> -		pr_cont(" single_disp");
> >>>>> -	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> >>>>> -		pr_cont(" video");
> >>>>> -	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> >>>>> -		pr_cont(" no_dc");
> >>>>> -	pr_cont("\n");
> >>>>> -}
> >>>>> -
> >>>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>>>> -				struct amdgpu_ps *rps)
> >>>>> -{
> >>>>> -	printk("\tstatus:");
> >>>>> -	if (rps == adev->pm.dpm.current_ps)
> >>>>> -		pr_cont(" c");
> >>>>> -	if (rps == adev->pm.dpm.requested_ps)
> >>>>> -		pr_cont(" r");
> >>>>> -	if (rps == adev->pm.dpm.boot_ps)
> >>>>> -		pr_cont(" b");
> >>>>> -	pr_cont("\n");
> >>>>> -}
> >>>>> -
> >>>>>     static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
> >>>>>     {
> >>>>>     	struct drm_device *ddev = adev_to_drm(adev);
> >>>>> @@ -161,7 +57,6 @@ static void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev)
> >>>>>     	}
> >>>>>     }
> >>>>>
> >>>>> -
> >>>>>     u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev)
> >>>>>     {
> >>>>>     	struct drm_device *dev = adev_to_drm(adev);
> >>>>> @@ -209,679 +104,6 @@ static u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)
> >>>>>     	return vrefresh;
> >>>>>     }
> >>>>>
> >>>>> -union power_info {
> >>>>> -	struct _ATOM_POWERPLAY_INFO info;
> >>>>> -	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> >>>>> -	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> >>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> >>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> >>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> >>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> >>>>> -	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> >>>>> -};
> >>>>> -
> >>>>> -union fan_info {
> >>>>> -	struct _ATOM_PPLIB_FANTABLE fan;
> >>>>> -	struct _ATOM_PPLIB_FANTABLE2 fan2;
> >>>>> -	struct _ATOM_PPLIB_FANTABLE3 fan3;
> >>>>> -};
> >>>>> -
> >>>>> -static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> >>>>> -					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> >>>>> -{
> >>>>> -	u32 size = atom_table->ucNumEntries *
> >>>>> -		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>>>> -	int i;
> >>>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> >>>>> -
> >>>>> -	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> >>>>> -	if (!amdgpu_table->entries)
> >>>>> -		return -ENOMEM;
> >>>>> -
> >>>>> -	entry = &atom_table->entries[0];
> >>>>> -	for (i = 0; i < atom_table->ucNumEntries; i++) {
> >>>>> -		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
> >>>>> -			(entry->ucClockHigh << 16);
> >>>>> -		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
> >>>>> -		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
> >>>>> -			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> >>>>> -	}
> >>>>> -	amdgpu_table->count = atom_table->ucNumEntries;
> >>>>> -
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> -	union power_info *power_info;
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> -	u16 data_offset;
> >>>>> -	u8 frev, crev;
> >>>>> -
> >>>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> -				   &frev, &crev, &data_offset))
> >>>>> -		return -EINVAL;
> >>>>> -	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> >>>>> -
> >>>>> -	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
> >>>>> -	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
> >>>>> -	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
> >>>>> -
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> >>>>> -#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> >>>>> -
> >>>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	struct amdgpu_mode_info *mode_info = &adev-
> >mode_info;
> >>>>> -	union power_info *power_info;
> >>>>> -	union fan_info *fan_info;
> >>>>> -	ATOM_PPLIB_Clock_Voltage_Dependency_Table
> *dep_table;
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> -	u16 data_offset;
> >>>>> -	u8 frev, crev;
> >>>>> -	int ret, i;
> >>>>> -
> >>>>> -	if (!amdgpu_atom_parse_data_header(mode_info-
> >atom_context,
> >>>> index, NULL,
> >>>>> -				   &frev, &crev, &data_offset))
> >>>>> -		return -EINVAL;
> >>>>> -	power_info = (union power_info *)(mode_info-
> >atom_context-
> >>>>> bios + data_offset);
> >>>>> -
> >>>>> -	/* fan table */
> >>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>>>> -		if (power_info->pplib3.usFanTableOffset) {
> >>>>> -			fan_info = (union fan_info *)(mode_info-
> >>>>> atom_context->bios + data_offset +
> >>>>> -
> le16_to_cpu(power_info-
> >>>>> pplib3.usFanTableOffset));
> >>>>> -			adev->pm.dpm.fan.t_hyst = fan_info-
> >fan.ucTHyst;
> >>>>> -			adev->pm.dpm.fan.t_min =
> le16_to_cpu(fan_info-
> >>>>> fan.usTMin);
> >>>>> -			adev->pm.dpm.fan.t_med =
> le16_to_cpu(fan_info-
> >>>>> fan.usTMed);
> >>>>> -			adev->pm.dpm.fan.t_high =
> le16_to_cpu(fan_info-
> >>>>> fan.usTHigh);
> >>>>> -			adev->pm.dpm.fan.pwm_min =
> >>>> le16_to_cpu(fan_info->fan.usPWMMin);
> >>>>> -			adev->pm.dpm.fan.pwm_med =
> >>>> le16_to_cpu(fan_info->fan.usPWMMed);
> >>>>> -			adev->pm.dpm.fan.pwm_high =
> >>>> le16_to_cpu(fan_info->fan.usPWMHigh);
> >>>>> -			if (fan_info->fan.ucFanTableFormat >= 2)
> >>>>> -				adev->pm.dpm.fan.t_max =
> >>>> le16_to_cpu(fan_info->fan2.usTMax);
> >>>>> -			else
> >>>>> -				adev->pm.dpm.fan.t_max = 10900;
> >>>>> -			adev->pm.dpm.fan.cycle_delay = 100000;
> >>>>> -			if (fan_info->fan.ucFanTableFormat >= 3) {
> >>>>> -				adev->pm.dpm.fan.control_mode =
> >>>> fan_info->fan3.ucFanControlMode;
> >>>>> -				adev-
> >pm.dpm.fan.default_max_fan_pwm
> >>>> =
> >>>>> -					le16_to_cpu(fan_info-
> >>>>> fan3.usFanPWMMax);
> >>>>> -				adev-
> >>>>> pm.dpm.fan.default_fan_output_sensitivity = 4836;
> >>>>> -				adev-
> >pm.dpm.fan.fan_output_sensitivity =
> >>>>> -					le16_to_cpu(fan_info-
> >>>>> fan3.usFanOutputSensitivity);
> >>>>> -			}
> >>>>> -			adev->pm.dpm.fan.ucode_fan_control =
> true;
> >>>>> -		}
> >>>>> -	}
> >>>>> -
> >>>>> -	/* clock dependancy tables, shedding tables */
> >>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> >>>>> -		if (power_info-
> >pplib4.usVddcDependencyOnSCLKOffset) {
> >>>>> -			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usVddcDependencyOnSCLKOffset));
> >>>>> -			ret =
> amdgpu_parse_clk_voltage_dep_table(&adev-
> >>>>> pm.dpm.dyn_state.vddc_dependency_on_sclk,
> >>>>> -
> dep_table);
> >>>>> -			if (ret) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return ret;
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if (power_info-
> >pplib4.usVddciDependencyOnMCLKOffset) {
> >>>>> -			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usVddciDependencyOnMCLKOffset));
> >>>>> -			ret =
> amdgpu_parse_clk_voltage_dep_table(&adev-
> >>>>> pm.dpm.dyn_state.vddci_dependency_on_mclk,
> >>>>> -
> dep_table);
> >>>>> -			if (ret) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return ret;
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if (power_info-
> >pplib4.usVddcDependencyOnMCLKOffset) {
> >>>>> -			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usVddcDependencyOnMCLKOffset));
> >>>>> -			ret =
> amdgpu_parse_clk_voltage_dep_table(&adev-
> >>>>> pm.dpm.dyn_state.vddc_dependency_on_mclk,
> >>>>> -
> dep_table);
> >>>>> -			if (ret) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return ret;
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if (power_info-
> >pplib4.usMvddDependencyOnMCLKOffset)
> >>>> {
> >>>>> -			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usMvddDependencyOnMCLKOffset));
> >>>>> -			ret =
> amdgpu_parse_clk_voltage_dep_table(&adev-
> >>>>> pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> >>>>> -
> dep_table);
> >>>>> -			if (ret) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return ret;
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if (power_info-
> >pplib4.usMaxClockVoltageOnDCOffset) {
> >>>>> -			ATOM_PPLIB_Clock_Voltage_Limit_Table
> *clk_v =
> >>>>> -
> 	(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usMaxClockVoltageOnDCOffset));
> >>>>> -			if (clk_v->ucNumEntries) {
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> >>>>> -					le16_to_cpu(clk_v-
> >>>>> entries[0].usSclkLow) |
> >>>>> -					(clk_v->entries[0].ucSclkHigh
> << 16);
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> >>>>> -					le16_to_cpu(clk_v-
> >>>>> entries[0].usMclkLow) |
> >>>>> -					(clk_v-
> >entries[0].ucMclkHigh << 16);
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> >>>>> -					le16_to_cpu(clk_v-
> >>>>> entries[0].usVddc);
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> >>>>> -					le16_to_cpu(clk_v-
> >>>>> entries[0].usVddci);
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if (power_info-
> >pplib4.usVddcPhaseShedLimitsTableOffset)
> >>>> {
> >>>>> -			ATOM_PPLIB_PhaseSheddingLimits_Table
> *psl =
> >>>>> -
> 	(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib4.usVddcPhaseShedLimitsTableOffset));
> >>>>> -			ATOM_PPLIB_PhaseSheddingLimits_Record
> *entry;
> >>>>> -
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> >>>>> -				kcalloc(psl->ucNumEntries,
> >>>>> -					sizeof(struct
> >>>> amdgpu_phase_shedding_limits_entry),
> >>>>> -					GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -
> >>>>> -			entry = &psl->entries[0];
> >>>>> -			for (i = 0; i < psl->ucNumEntries; i++) {
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> >>>>> -					le16_to_cpu(entry-
> >usSclkLow) |
> >>>> (entry->ucSclkHigh << 16);
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> >>>>> -					le16_to_cpu(entry-
> >usMclkLow) |
> >>>> (entry->ucMclkHigh << 16);
> >>>>> -				adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> >>>>> -					le16_to_cpu(entry-
> >usVoltage);
> >>>>> -				entry =
> >>>> (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> >>>>> -			}
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.phase_shedding_limits_table.count =
> >>>>> -				psl->ucNumEntries;
> >>>>> -		}
> >>>>> -	}
> >>>>> -
> >>>>> -	/* cac data */
> >>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> >>>>> -		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info-
> >>>>> pplib5.ulTDPLimit);
> >>>>> -		adev->pm.dpm.near_tdp_limit =
> le32_to_cpu(power_info-
> >>>>> pplib5.ulNearTDPLimit);
> >>>>> -		adev->pm.dpm.near_tdp_limit_adjusted = adev-
> >>>>> pm.dpm.near_tdp_limit;
> >>>>> -		adev->pm.dpm.tdp_od_limit =
> le16_to_cpu(power_info-
> >>>>> pplib5.usTDPODLimit);
> >>>>> -		if (adev->pm.dpm.tdp_od_limit)
> >>>>> -			adev->pm.dpm.power_control = true;
> >>>>> -		else
> >>>>> -			adev->pm.dpm.power_control = false;
> >>>>> -		adev->pm.dpm.tdp_adjustment = 0;
> >>>>> -		adev->pm.dpm.sq_ramping_threshold =
> >>>> le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> >>>>> -		adev->pm.dpm.cac_leakage =
> le32_to_cpu(power_info-
> >>>>> pplib5.ulCACLeakage);
> >>>>> -		adev->pm.dpm.load_line_slope =
> le16_to_cpu(power_info-
> >>>>> pplib5.usLoadLineSlope);
> >>>>> -		if (power_info->pplib5.usCACLeakageTableOffset) {
> >>>>> -			ATOM_PPLIB_CAC_Leakage_Table
> *cac_table =
> >>>>> -				(ATOM_PPLIB_CAC_Leakage_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(power_info-
> >>>>> pplib5.usCACLeakageTableOffset));
> >>>>> -			ATOM_PPLIB_CAC_Leakage_Record *entry;
> >>>>> -			u32 size = cac_table->ucNumEntries *
> sizeof(struct
> >>>> amdgpu_cac_leakage_table);
> >>>>> -			adev-
> >pm.dpm.dyn_state.cac_leakage_table.entries
> >>>> = kzalloc(size, GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			entry = &cac_table->entries[0];
> >>>>> -			for (i = 0; i < cac_table->ucNumEntries; i++) {
> >>>>> -				if (adev->pm.dpm.platform_caps &
> >>>> ATOM_PP_PLATFORM_CAP_EVV) {
> >>>>> -					adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> >>>>> -						le16_to_cpu(entry-
> >>>>> usVddc1);
> >>>>> -					adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> >>>>> -						le16_to_cpu(entry-
> >>>>> usVddc2);
> >>>>> -					adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> >>>>> -						le16_to_cpu(entry-
> >>>>> usVddc3);
> >>>>> -				} else {
> >>>>> -					adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> >>>>> -						le16_to_cpu(entry-
> >usVddc);
> >>>>> -					adev-
> >>>>> pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> >>>>> -						le32_to_cpu(entry-
> >>>>> ulLeakageValue);
> >>>>> -				}
> >>>>> -				entry =
> (ATOM_PPLIB_CAC_Leakage_Record
> >>>> *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> >>>>> -			}
> >>>>> -			adev-
> >pm.dpm.dyn_state.cac_leakage_table.count
> >>>> = cac_table->ucNumEntries;
> >>>>> -		}
> >>>>> -	}
> >>>>> -
> >>>>> -	/* ext tables */
> >>>>> -	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> -	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>>>> -		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr =
> >>>> (ATOM_PPLIB_EXTENDEDHEADER *)
> >>>>> -			(mode_info->atom_context->bios +
> data_offset +
> >>>>> -			 le16_to_cpu(power_info-
> >>>>> pplib3.usExtendendedHeaderOffset));
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> >>>>> -			ext_hdr->usVCETableOffset) {
> >>>>> -			VCEClockInfoArray *array =
> (VCEClockInfoArray *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usVCETableOffset) +
> >>>> 1);
> >>>>> -
> 	ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table
> >>>> *limits =
> >>>>> -
> >>>> 	(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usVCETableOffset) +
> >>>> 1 +
> >>>>> -				 1 + array->ucNumEntries *
> >>>> sizeof(VCEClockInfo));
> >>>>> -			ATOM_PPLIB_VCE_State_Table *states =
> >>>>> -				(ATOM_PPLIB_VCE_State_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usVCETableOffset) +
> >>>> 1 +
> >>>>> -				 1 + (array->ucNumEntries * sizeof
> >>>> (VCEClockInfo)) +
> >>>>> -				 1 + (limits->numEntries *
> >>>> sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> >>>>> -
> 	ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record
> >>>> *entry;
> >>>>> -			ATOM_PPLIB_VCE_State_Record
> *state_entry;
> >>>>> -			VCEClockInfo *vce_clk;
> >>>>> -			u32 size = limits->numEntries *
> >>>>> -				sizeof(struct
> >>>> amdgpu_vce_clock_voltage_dependency_entry);
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> >>>>> -				kzalloc(size, GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> >>>>> -				limits->numEntries;
> >>>>> -			entry = &limits->entries[0];
> >>>>> -			state_entry = &states->entries[0];
> >>>>> -			for (i = 0; i < limits->numEntries; i++) {
> >>>>> -				vce_clk = (VCEClockInfo *)
> >>>>> -					((u8 *)&array->entries[0] +
> >>>>> -					 (entry-
> >ucVCEClockInfoIndex *
> >>>> sizeof(VCEClockInfo)));
> >>>>> -				adev-
> >>>>>
> >>
> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk
> >>>> =
> >>>>> -					le16_to_cpu(vce_clk-
> >usEVClkLow) |
> >>>> (vce_clk->ucEVClkHigh << 16);
> >>>>> -				adev-
> >>>>>
> >>
> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk
> >>>> =
> >>>>> -					le16_to_cpu(vce_clk-
> >usECClkLow) |
> >>>> (vce_clk->ucECClkHigh << 16);
> >>>>> -				adev-
> >>>>>
> pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> >>>>> -					le16_to_cpu(entry-
> >usVoltage);
> >>>>> -				entry =
> >>>> (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> >>>>> -			}
> >>>>> -			adev->pm.dpm.num_of_vce_states =
> >>>>> -					states->numEntries >
> >>>> AMD_MAX_VCE_LEVELS ?
> >>>>> -					AMD_MAX_VCE_LEVELS :
> states-
> >>>>> numEntries;
> >>>>> -			for (i = 0; i < adev-
> >pm.dpm.num_of_vce_states; i++)
> >>>> {
> >>>>> -				vce_clk = (VCEClockInfo *)
> >>>>> -					((u8 *)&array->entries[0] +
> >>>>> -					 (state_entry-
> >ucVCEClockInfoIndex
> >>>> * sizeof(VCEClockInfo)));
> >>>>> -				adev->pm.dpm.vce_states[i].evclk =
> >>>>> -					le16_to_cpu(vce_clk-
> >usEVClkLow) |
> >>>> (vce_clk->ucEVClkHigh << 16);
> >>>>> -				adev->pm.dpm.vce_states[i].ecclk =
> >>>>> -					le16_to_cpu(vce_clk-
> >usECClkLow) |
> >>>> (vce_clk->ucECClkHigh << 16);
> >>>>> -				adev->pm.dpm.vce_states[i].clk_idx
> =
> >>>>> -					state_entry-
> >ucClockInfoIndex &
> >>>> 0x3f;
> >>>>> -				adev->pm.dpm.vce_states[i].pstate
> =
> >>>>> -					(state_entry-
> >ucClockInfoIndex &
> >>>> 0xc0) >> 6;
> >>>>> -				state_entry =
> >>>> (ATOM_PPLIB_VCE_State_Record *)
> >>>>> -					((u8 *)state_entry +
> >>>> sizeof(ATOM_PPLIB_VCE_State_Record));
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> >>>>> -			ext_hdr->usUVDTableOffset) {
> >>>>> -			UVDClockInfoArray *array =
> (UVDClockInfoArray *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usUVDTableOffset) +
> >>>> 1);
> >>>>> -
> 	ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table
> >>>> *limits =
> >>>>> -
> >>>> 	(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usUVDTableOffset) +
> >>>> 1 +
> >>>>> -				 1 + (array->ucNumEntries * sizeof
> >>>> (UVDClockInfo)));
> >>>>> -
> 	ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record
> >>>> *entry;
> >>>>> -			u32 size = limits->numEntries *
> >>>>> -				sizeof(struct
> >>>> amdgpu_uvd_clock_voltage_dependency_entry);
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> >>>>> -				kzalloc(size, GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> >>>>> -				limits->numEntries;
> >>>>> -			entry = &limits->entries[0];
> >>>>> -			for (i = 0; i < limits->numEntries; i++) {
> >>>>> -				UVDClockInfo *uvd_clk =
> (UVDClockInfo *)
> >>>>> -					((u8 *)&array->entries[0] +
> >>>>> -					 (entry-
> >ucUVDClockInfoIndex *
> >>>> sizeof(UVDClockInfo)));
> >>>>> -				adev-
> >>>>>
> >> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk
> =
> >>>>> -					le16_to_cpu(uvd_clk-
> >usVClkLow) |
> >>>> (uvd_clk->ucVClkHigh << 16);
> >>>>> -				adev-
> >>>>>
> >> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk
> =
> >>>>> -					le16_to_cpu(uvd_clk-
> >usDClkLow) |
> >>>> (uvd_clk->ucDClkHigh << 16);
> >>>>> -				adev-
> >>>>>
> pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> >>>>> -					le16_to_cpu(entry-
> >usVoltage);
> >>>>> -				entry =
> >>>> (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> >>>>> -			ext_hdr->usSAMUTableOffset) {
> >>>>> -			ATOM_PPLIB_SAMClk_Voltage_Limit_Table
> *limits =
> >>>>> -
> 	(ATOM_PPLIB_SAMClk_Voltage_Limit_Table
> >>>> *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usSAMUTableOffset)
> >>>> + 1);
> >>>>> -
> 	ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> >>>>> -			u32 size = limits->numEntries *
> >>>>> -				sizeof(struct
> >>>> amdgpu_clock_voltage_dependency_entry);
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries
> =
> >>>>> -				kzalloc(size, GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries)
> {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> >>>>> -				limits->numEntries;
> >>>>> -			entry = &limits->entries[0];
> >>>>> -			for (i = 0; i < limits->numEntries; i++) {
> >>>>> -				adev-
> >>>>>
> >> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk
> =
> >>>>> -					le16_to_cpu(entry-
> >usSAMClockLow)
> >>>> | (entry->ucSAMClockHigh << 16);
> >>>>> -				adev-
> >>>>>
> pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v
> >> =
> >>>>> -					le16_to_cpu(entry-
> >usVoltage);
> >>>>> -				entry =
> >>>> (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> >>>>> -		    ext_hdr->usPPMTableOffset) {
> >>>>> -			ATOM_PPLIB_PPM_Table *ppm =
> >>>> (ATOM_PPLIB_PPM_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usPPMTableOffset));
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table =
> >>>>> -				kzalloc(sizeof(struct
> amdgpu_ppm_table),
> >>>> GFP_KERNEL);
> >>>>> -			if (!adev->pm.dpm.dyn_state.ppm_table) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >ppm_design
> >>>> = ppm->ucPpmDesign;
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>>>> cpu_core_number =
> >>>>> -				le16_to_cpu(ppm-
> >usCpuCoreNumber);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>>>> platform_tdp =
> >>>>> -				le32_to_cpu(ppm->ulPlatformTDP);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>>>> small_ac_platform_tdp =
> >>>>> -				le32_to_cpu(ppm-
> >ulSmallACPlatformTDP);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >platform_tdc
> >>>> =
> >>>>> -				le32_to_cpu(ppm->ulPlatformTDC);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>>>> small_ac_platform_tdc =
> >>>>> -				le32_to_cpu(ppm-
> >ulSmallACPlatformTDC);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >apu_tdp =
> >>>>> -				le32_to_cpu(ppm->ulApuTDP);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >dgpu_tdp =
> >>>>> -				le32_to_cpu(ppm->ulDGpuTDP);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >>>>> dgpu_ulv_power =
> >>>>> -				le32_to_cpu(ppm-
> >ulDGpuUlvPower);
> >>>>> -			adev->pm.dpm.dyn_state.ppm_table-
> >tj_max =
> >>>>> -				le32_to_cpu(ppm->ulTjmax);
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> >>>>> -			ext_hdr->usACPTableOffset) {
> >>>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Table
> *limits =
> >>>>> -
> 	(ATOM_PPLIB_ACPClk_Voltage_Limit_Table
> >>>> *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >usACPTableOffset) +
> >>>> 1);
> >>>>> -			ATOM_PPLIB_ACPClk_Voltage_Limit_Record
> *entry;
> >>>>> -			u32 size = limits->numEntries *
> >>>>> -				sizeof(struct
> >>>> amdgpu_clock_voltage_dependency_entry);
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> >>>>> -				kzalloc(size, GFP_KERNEL);
> >>>>> -			if (!adev-
> >>>>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			adev-
> >>>>> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> >>>>> -				limits->numEntries;
> >>>>> -			entry = &limits->entries[0];
> >>>>> -			for (i = 0; i < limits->numEntries; i++) {
> >>>>> -				adev-
> >>>>>
> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk
> >> =
> >>>>> -					le16_to_cpu(entry-
> >usACPClockLow)
> >>>> | (entry->ucACPClockHigh << 16);
> >>>>> -				adev-
> >>>>>
> pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> >>>>> -					le16_to_cpu(entry-
> >usVoltage);
> >>>>> -				entry =
> >>>> (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> >>>>> -					((u8 *)entry +
> >>>> sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> >>>>> -			}
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> >>>>> -			ext_hdr->usPowerTuneTableOffset) {
> >>>>> -			u8 rev = *(u8 *)(mode_info->atom_context-
> >bios +
> >>>> data_offset +
> >>>>> -					 le16_to_cpu(ext_hdr-
> >>>>> usPowerTuneTableOffset));
> >>>>> -			ATOM_PowerTune_Table *pt;
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table =
> >>>>> -				kzalloc(sizeof(struct
> amdgpu_cac_tdp_table),
> >>>> GFP_KERNEL);
> >>>>> -			if (!adev->pm.dpm.dyn_state.cac_tdp_table)
> {
> >>>>> -
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> -				return -ENOMEM;
> >>>>> -			}
> >>>>> -			if (rev > 0) {
> >>>>> -
> 	ATOM_PPLIB_POWERTUNE_Table_V1 *ppt =
> >>>> (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> >>>>> -					(mode_info->atom_context-
> >bios +
> >>>> data_offset +
> >>>>> -					 le16_to_cpu(ext_hdr-
> >>>>> usPowerTuneTableOffset));
> >>>>> -				adev-
> >pm.dpm.dyn_state.cac_tdp_table-
> >>>>> maximum_power_delivery_limit =
> >>>>> -					ppt-
> >usMaximumPowerDeliveryLimit;
> >>>>> -				pt = &ppt->power_tune_table;
> >>>>> -			} else {
> >>>>> -				ATOM_PPLIB_POWERTUNE_Table
> *ppt =
> >>>> (ATOM_PPLIB_POWERTUNE_Table *)
> >>>>> -					(mode_info->atom_context-
> >bios +
> >>>> data_offset +
> >>>>> -					 le16_to_cpu(ext_hdr-
> >>>>> usPowerTuneTableOffset));
> >>>>> -				adev-
> >pm.dpm.dyn_state.cac_tdp_table-
> >>>>> maximum_power_delivery_limit = 255;
> >>>>> -				pt = &ppt->power_tune_table;
> >>>>> -			}
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >tdp =
> >>>> le16_to_cpu(pt->usTDP);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>>>> configurable_tdp =
> >>>>> -				le16_to_cpu(pt->usConfigurableTDP);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >tdc =
> >>>> le16_to_cpu(pt->usTDC);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>>>> battery_power_limit =
> >>>>> -				le16_to_cpu(pt-
> >usBatteryPowerLimit);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>>>> small_power_limit =
> >>>>> -				le16_to_cpu(pt->usSmallPowerLimit);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>>>> low_cac_leakage =
> >>>>> -				le16_to_cpu(pt->usLowCACLeakage);
> >>>>> -			adev->pm.dpm.dyn_state.cac_tdp_table-
> >>>>> high_cac_leakage =
> >>>>> -				le16_to_cpu(pt->usHighCACLeakage);
> >>>>> -		}
> >>>>> -		if ((le16_to_cpu(ext_hdr->usSize) >=
> >>>> SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> >>>>> -				ext_hdr->usSclkVddgfxTableOffset) {
> >>>>> -			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> -				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> -				 le16_to_cpu(ext_hdr-
> >>>>> usSclkVddgfxTableOffset));
> >>>>> -			ret = amdgpu_parse_clk_voltage_dep_table(
> >>>>> -					&adev-
> >>>>> pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> >>>>> -					dep_table);
> >>>>> -			if (ret) {
> >>>>> -				kfree(adev-
> >>>>> pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> >>>>> -				return ret;
> >>>>> -			}
> >>>>> -		}
> >>>>> -	}
> >>>>> -
> >>>>> -	return 0;
> >>>>> -}
> >>>>> -
> >>>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> >>>>> -
> >>>>> -	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> >>>>> -	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> >>>>> -	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> >>>>> -	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> >>>>> -	kfree(dyn_state->cac_leakage_table.entries);
> >>>>> -	kfree(dyn_state->phase_shedding_limits_table.entries);
> >>>>> -	kfree(dyn_state->ppm_table);
> >>>>> -	kfree(dyn_state->cac_tdp_table);
> >>>>> -	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> >>>>> -	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> >>>>> -	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> >>>>> -	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> >>>>> -	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> >>>>> -}
> >>>>> -
> >>>>> -static const char *pp_lib_thermal_controller_names[] = {
> >>>>> -	"NONE",
> >>>>> -	"lm63",
> >>>>> -	"adm1032",
> >>>>> -	"adm1030",
> >>>>> -	"max6649",
> >>>>> -	"lm64",
> >>>>> -	"f75375",
> >>>>> -	"RV6xx",
> >>>>> -	"RV770",
> >>>>> -	"adt7473",
> >>>>> -	"NONE",
> >>>>> -	"External GPIO",
> >>>>> -	"Evergreen",
> >>>>> -	"emc2103",
> >>>>> -	"Sumo",
> >>>>> -	"Northern Islands",
> >>>>> -	"Southern Islands",
> >>>>> -	"lm96163",
> >>>>> -	"Sea Islands",
> >>>>> -	"Kaveri/Kabini",
> >>>>> -};
> >>>>> -
> >>>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> -	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> >>>>> -	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> -	ATOM_PPLIB_THERMALCONTROLLER *controller;
> >>>>> -	struct amdgpu_i2c_bus_rec i2c_bus;
> >>>>> -	u16 data_offset;
> >>>>> -	u8 frev, crev;
> >>>>> -
> >>>>> -	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> -				   &frev, &crev, &data_offset))
> >>>>> -		return;
> >>>>> -	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> >>>>> -		(mode_info->atom_context->bios + data_offset);
> >>>>> -	controller = &power_table->sThermalController;
> >>>>> -
> >>>>> -	/* add the i2c bus for thermal/fan chip */
> >>>>> -	if (controller->ucType > 0) {
> >>>>> -		if (controller->ucFanParameters &
> >>>> ATOM_PP_FANPARAMETERS_NOFAN)
> >>>>> -			adev->pm.no_fan = true;
> >>>>> -		adev->pm.fan_pulses_per_revolution =
> >>>>> -			controller->ucFanParameters &
> >>>>
> >>
> ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_M
> >>>> ASK;
> >>>>> -		if (adev->pm.fan_pulses_per_revolution) {
> >>>>> -			adev->pm.fan_min_rpm = controller-
> >ucFanMinRPM;
> >>>>> -			adev->pm.fan_max_rpm = controller-
> >>>>> ucFanMaxRPM;
> >>>>> -		}
> >>>>> -		if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_RV6xx) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_RV6XX;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_RV770) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_RV770;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_EVERGREEN;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_SUMO) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_SUMO;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> THERMAL_TYPE_NI;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> THERMAL_TYPE_SI;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> THERMAL_TYPE_CI;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_KAVERI) {
> >>>>> -			DRM_INFO("Internal thermal controller %s
> fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> THERMAL_TYPE_KV;
> >>>>> -		} else if (controller->ucType ==
> >>>> ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> >>>>> -			DRM_INFO("External GPIO thermal
> controller %s fan
> >>>> control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_EXTERNAL_GPIO;
> >>>>> -		} else if (controller->ucType ==
> >>>>> -
> >>>> ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> >>>>> -			DRM_INFO("ADT7473 with internal thermal
> >>>> controller %s fan control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> >>>>> -		} else if (controller->ucType ==
> >>>>> -
> >>>> ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> >>>>> -			DRM_INFO("EMC2103 with internal thermal
> >>>> controller %s fan control\n",
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> >>>>> -		} else if (controller->ucType <
> >>>> ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> >>>>> -			DRM_INFO("Possible %s thermal controller at
> >>>> 0x%02x %s fan control\n",
> >>>>> -
> >>>> pp_lib_thermal_controller_names[controller->ucType],
> >>>>> -				 controller->ucI2cAddress >> 1,
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -			adev->pm.int_thermal_type =
> >>>> THERMAL_TYPE_EXTERNAL;
> >>>>> -			i2c_bus =
> amdgpu_atombios_lookup_i2c_gpio(adev,
> >>>> controller->ucI2cLine);
> >>>>> -			adev->pm.i2c_bus =
> amdgpu_i2c_lookup(adev,
> >>>> &i2c_bus);
> >>>>> -			if (adev->pm.i2c_bus) {
> >>>>> -				struct i2c_board_info info = { };
> >>>>> -				const char *name =
> >>>> pp_lib_thermal_controller_names[controller->ucType];
> >>>>> -				info.addr = controller-
> >ucI2cAddress >> 1;
> >>>>> -				strlcpy(info.type, name,
> sizeof(info.type));
> >>>>> -				i2c_new_client_device(&adev-
> >pm.i2c_bus-
> >>>>> adapter, &info);
> >>>>> -			}
> >>>>> -		} else {
> >>>>> -			DRM_INFO("Unknown thermal controller
> type %d at
> >>>> 0x%02x %s fan control\n",
> >>>>> -				 controller->ucType,
> >>>>> -				 controller->ucI2cAddress >> 1,
> >>>>> -				 (controller->ucFanParameters &
> >>>>> -
> ATOM_PP_FANPARAMETERS_NOFAN) ?
> >>>> "without" : "with");
> >>>>> -		}
> >>>>> -	}
> >>>>> -}
> >>>>> -
> >>>>> -struct amd_vce_state*
> >>>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx)
> >>>>> -{
> >>>>> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>>>> -
> >>>>> -	if (idx < adev->pm.dpm.num_of_vce_states)
> >>>>> -		return &adev->pm.dpm.vce_states[idx];
> >>>>> -
> >>>>> -	return NULL;
> >>>>> -}
> >>>>> -
> >>>>>     int amdgpu_dpm_get_sclk(struct amdgpu_device *adev, bool low)
> >>>>>     {
> >>>>>     	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> >>>>> @@ -1243,211 +465,6 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
> >>>>>     	amdgpu_pm_compute_clocks(adev);
> >>>>>     }
> >>>>>
> >>>>> -static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> >>>>> -						     enum amd_pm_state_type dpm_state)
> >>>>> -{
> >>>>> -	int i;
> >>>>> -	struct amdgpu_ps *ps;
> >>>>> -	u32 ui_class;
> >>>>> -	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> >>>>> -		true : false;
> >>>>> -
> >>>>> -	/* check if the vblank period is too short to adjust the mclk */
> >>>>> -	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> >>>>> -		if (amdgpu_dpm_vblank_too_short(adev))
> >>>>> -			single_display = false;
> >>>>> -	}
> >>>>> -
> >>>>> -	/* certain older asics have a separare 3D performance state,
> >>>>> -	 * so try that first if the user selected performance
> >>>>> -	 */
> >>>>> -	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> >>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> >>>>> -	/* balanced states don't exist at the moment */
> >>>>> -	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> >>>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> -
> >>>>> -restart_search:
> >>>>> -	/* Pick the best power state based on current conditions */
> >>>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> >>>>> -		ps = &adev->pm.dpm.ps[i];
> >>>>> -		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> >>>>> -		switch (dpm_state) {
> >>>>> -		/* user states */
> >>>>> -		case POWER_STATE_TYPE_BATTERY:
> >>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> >>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> -					if (single_display)
> >>>>> -						return ps;
> >>>>> -				} else
> >>>>> -					return ps;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_BALANCED:
> >>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> >>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> -					if (single_display)
> >>>>> -						return ps;
> >>>>> -				} else
> >>>>> -					return ps;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_PERFORMANCE:
> >>>>> -			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> >>>>> -				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> -					if (single_display)
> >>>>> -						return ps;
> >>>>> -				} else
> >>>>> -					return ps;
> >>>>> -			}
> >>>>> -			break;
> >>>>> -		/* internal states */
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD:
> >>>>> -			if (adev->pm.dpm.uvd_ps)
> >>>>> -				return adev->pm.dpm.uvd_ps;
> >>>>> -			else
> >>>>> -				break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_BOOT:
> >>>>> -			return adev->pm.dpm.boot_ps;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_ULV:
> >>>>> -			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>>>> -			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>>>> -				return ps;
> >>>>> -			break;
> >>>>> -		default:
> >>>>> -			break;
> >>>>> -		}
> >>>>> -	}
> >>>>> -	/* use a fallback state if we didn't match */
> >>>>> -	switch (dpm_state) {
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> >>>>> -		goto restart_search;
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>>>> -		if (adev->pm.dpm.uvd_ps) {
> >>>>> -			return adev->pm.dpm.uvd_ps;
> >>>>> -		} else {
> >>>>> -			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> -			goto restart_search;
> >>>>> -		}
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>>>> -		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> >>>>> -		goto restart_search;
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>>>> -		dpm_state = POWER_STATE_TYPE_BATTERY;
> >>>>> -		goto restart_search;
> >>>>> -	case POWER_STATE_TYPE_BATTERY:
> >>>>> -	case POWER_STATE_TYPE_BALANCED:
> >>>>> -	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>>>> -		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> -		goto restart_search;
> >>>>> -	default:
> >>>>> -		break;
> >>>>> -	}
> >>>>> -
> >>>>> -	return NULL;
> >>>>> -}
> >>>>> -
> >>>>> -static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	struct amdgpu_ps *ps;
> >>>>> -	enum amd_pm_state_type dpm_state;
> >>>>> -	int ret;
> >>>>> -	bool equal = false;
> >>>>> -
> >>>>> -	/* if dpm init failed */
> >>>>> -	if (!adev->pm.dpm_enabled)
> >>>>> -		return;
> >>>>> -
> >>>>> -	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> >>>>> -		/* add other state override checks here */
> >>>>> -		if ((!adev->pm.dpm.thermal_active) &&
> >>>>> -		    (!adev->pm.dpm.uvd_active))
> >>>>> -			adev->pm.dpm.state = adev->pm.dpm.user_state;
> >>>>> -	}
> >>>>> -	dpm_state = adev->pm.dpm.state;
> >>>>> -
> >>>>> -	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> >>>>> -	if (ps)
> >>>>> -		adev->pm.dpm.requested_ps = ps;
> >>>>> -	else
> >>>>> -		return;
> >>>>> -
> >>>>> -	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> >>>>> -		printk("switching from power state:\n");
> >>>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> >>>>> -		printk("switching to power state:\n");
> >>>>> -		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> >>>>> -	}
> >>>>> -
> >>>>> -	/* update whether vce is active */
> >>>>> -	ps->vce_active = adev->pm.dpm.vce_active;
> >>>>> -	if (adev->powerplay.pp_funcs->display_configuration_changed)
> >>>>> -		amdgpu_dpm_display_configuration_changed(adev);
> >>>>> -
> >>>>> -	ret = amdgpu_dpm_pre_set_power_state(adev);
> >>>>> -	if (ret)
> >>>>> -		return;
> >>>>> -
> >>>>> -	if (adev->powerplay.pp_funcs->check_state_equal) {
> >>>>> -		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps, adev->pm.dpm.requested_ps, &equal))
> >>>>> -			equal = false;
> >>>>> -	}
> >>>>> -
> >>>>> -	if (equal)
> >>>>> -		return;
> >>>>> -
> >>>>> -	if (adev->powerplay.pp_funcs->set_power_state)
> >>>>> -		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> >>>>> -
> >>>>> -	amdgpu_dpm_post_set_power_state(adev);
> >>>>> -
> >>>>> -	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> >>>>> -	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> >>>>> -
> >>>>> -	if (adev->powerplay.pp_funcs->force_performance_level) {
> >>>>> -		if (adev->pm.dpm.thermal_active) {
> >>>>> -			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> >>>>> -			/* force low perf level for thermal */
> >>>>> -			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> >>>>> -			/* save the user's level */
> >>>>> -			adev->pm.dpm.forced_level = level;
> >>>>> -		} else {
> >>>>> -			/* otherwise, user selected level */
> >>>>> -			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> >>>>> -		}
> >>>>> -	}
> >>>>> -}
> >>>>> -
> >>>>>     void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >>>>>     {
> >>>>
> >>>> Rename to amdgpu_dpm_compute_clocks?
> >>> [Quan, Evan] Sure, I can do that.
> >>>>
> >>>>>     	int i = 0;
> >>>>> @@ -1464,9 +481,12 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >>>>>     			amdgpu_fence_wait_empty(ring);
> >>>>>     	}
> >>>>>
> >>>>> -	if (adev->powerplay.pp_funcs->dispatch_tasks) {
> >>>>> +	if ((adev->family == AMDGPU_FAMILY_SI) ||
> >>>>> +	     (adev->family == AMDGPU_FAMILY_KV)) {
> >>>>> +		amdgpu_dpm_get_active_displays(adev);
> >>>>> +		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
> >>>>
> >>>> It would be clearer if the newly added logic in this function is in
> >>>> another patch. This does more than what the patch subject says.
> >>> [Quan, Evan] Actually there is no new logic added. These are for
> >>> "!adev->powerplay.pp_funcs->dispatch_tasks".
> >>> Considering there are actually only SI and KV which do not have
> >>> ->dispatch_tasks() implemented,
> >>> I used "((adev->family == AMDGPU_FAMILY_SI) || (adev->family ==
> >>> AMDGPU_FAMILY_KV))" here.
> >>> Maybe I should stick with "!adev->powerplay.pp_funcs->dispatch_tasks"?
> >>
> >> This change also adds a new callback change_power_state(). I interpreted
> >> it as something different from what the patch subject says.
> >>
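To make the equivalence being discussed concrete, here is a minimal sketch
(illustration only, not part of this patch) of the two checks. The
capability-based form keys off ->dispatch_tasks being unimplemented, which
today is true only for SI and KV; the family-based form names those ASICs
explicitly:

	/* capability-based check, following the old "!dispatch_tasks" path */
	if (!adev->powerplay.pp_funcs->dispatch_tasks) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}

	/* family-based check, equivalent today since only SI/KV lack dispatch_tasks */
	if ((adev->family == AMDGPU_FAMILY_SI) ||
	    (adev->family == AMDGPU_FAMILY_KV)) {
		amdgpu_dpm_get_active_displays(adev);
		adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);
	}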
> >>>>
> >>>>> +	} else {
> >>>>>     		if (!amdgpu_device_has_dc_support(adev)) {
> >>>>> -			mutex_lock(&adev->pm.mutex);
> >>>>>     			amdgpu_dpm_get_active_displays(adev);
> >>>>>     			adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
> >>>>>     			adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
> >>>>> @@ -1480,14 +500,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
> >>>>>     				adev->powerplay.pp_funcs->display_configuration_change(
> >>>>>     							adev->powerplay.pp_handle,
> >>>>>     							&adev->pm.pm_display_cfg);
> >>>>> -			mutex_unlock(&adev->pm.mutex);
> >>>>>     		}
> >>>>>     		amdgpu_dpm_dispatch_task(adev, AMD_PP_TASK_DISPLAY_CONFIG_CHANGE, NULL);
> >>>>> -	} else {
> >>>>> -		mutex_lock(&adev->pm.mutex);
> >>>>> -		amdgpu_dpm_get_active_displays(adev);
> >>>>> -		amdgpu_dpm_change_power_state_locked(adev);
> >>>>> -		mutex_unlock(&adev->pm.mutex);
> >>>>>     	}
> >>>>>     }
> >>>>>
> >>>>> @@ -1550,18 +564,6 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
> >>>>>     	}
> >>>>>     }
> >>>>>
> >>>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> >>>>> -{
> >>>>> -	int i;
> >>>>> -
> >>>>> -	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> >>>>> -		return;
> >>>>> -
> >>>>> -	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> >>>>> -		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
> >>>>> -
> >>>>> -}
> >>>>> -
> >>>>>     void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
> >>>>>     {
> >>>>>     	int ret = 0;
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>>>> index 01120b302590..295d2902aef7 100644
> >>>>> --- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>>>> +++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
> >>>>> @@ -366,24 +366,10 @@ enum amdgpu_display_gap
> >>>>>         AMDGPU_PM_DISPLAY_GAP_IGNORE       = 3,
> >>>>>     };
> >>>>>
> >>>>> -void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> >>>>> -void amdgpu_dpm_print_cap_info(u32 caps);
> >>>>> -void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>>>> -				struct amdgpu_ps *rps);
> >>>>>     u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);
> >>>>>     int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors sensor,
> >>>>>     			   void *data, uint32_t *size);
> >>>>>
> >>>>> -int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> >>>>> -
> >>>>> -int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> >>>>> -void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> >>>>> -
> >>>>> -void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> >>>>> -
> >>>>> -struct amd_vce_state*
> >>>>> -amdgpu_get_vce_clock_state(void *handle, u32 idx);
> >>>>> -
> >>>>>     int amdgpu_dpm_set_powergating_by_smu(struct amdgpu_device *adev,
> >>>>>     				      uint32_t block_type, bool gate);
> >>>>>
> >>>>> @@ -438,7 +424,6 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev);
> >>>>>     void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
> >>>>>     void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
> >>>>>     void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
> >>>>> -void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >>>>>     int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
> >>>>>     int amdgpu_dpm_set_light_sbr(struct amdgpu_device *adev, bool enable);
> >>>>>     int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/Makefile b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>>>> index 0fb114adc79f..614d8b6a58ad 100644
> >>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/Makefile
> >>>>> @@ -28,7 +28,7 @@ AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix $(FULL_AMD_PATH)/pm/powerplay/
> >>>>>
> >>>>>     include $(AMD_POWERPLAY)
> >>>>>
> >>>>> -POWER_MGR-y = amd_powerplay.o
> >>>>> +POWER_MGR-y = amd_powerplay.o legacy_dpm.o
> >>>>>
> >>>>>     POWER_MGR-$(CONFIG_DRM_AMDGPU_CIK)+= kv_dpm.o kv_smc.o
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>>>> index 380a5336c74f..90f4c65659e2 100644
> >>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
> >>>>> @@ -36,6 +36,7 @@
> >>>>>
> >>>>>     #include "gca/gfx_7_2_d.h"
> >>>>>     #include "gca/gfx_7_2_sh_mask.h"
> >>>>> +#include "legacy_dpm.h"
> >>>>>
> >>>>>     #define KV_MAX_DEEPSLEEP_DIVIDER_ID     5
> >>>>>     #define KV_MINIMUM_ENGINE_CLOCK         800
> >>>>> @@ -3389,6 +3390,7 @@ static const struct amd_pm_funcs kv_dpm_funcs = {
> >>>>>     	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >>>>>     	.check_state_equal = kv_check_state_equal,
> >>>>>     	.read_sensor = &kv_dpm_read_sensor,
> >>>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >>>>>     };
> >>>>>
> >>>>>     static const struct amdgpu_irq_src_funcs kv_dpm_irq_funcs = {
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>>> This could get confused with all APIs that support legacy dpms. This
> >>>> file has only a subset of APIs to support legacy dpm. Needs a better
> >>>> name - powerplay_ctrl/powerplay_util ?
> >>> [Quan, Evan] The "legacy_dpm" refers for those logics used only by
> >> si/kv(si_dpm.c, kv_dpm.c).
> >>> Considering these logics are not used at default(radeon driver instead of
> >> amdgpu driver is used to support those legacy ASICs at default).
> >>> We might drop support for them from our amdgpu driver. So, I gather all
> >> those APIs and put them in a new holder.
> >>> Maybe you wrongly treat it as a new holder for powerplay APIs(used by
> >> VI/AI)?
> >>
> >> As it got moved under powerplay, I thought they were also used in AI/VI
> >> powerplay. Otherwise, move si/kv along with this out of powerplay and
> >> keep them separate.
> >>
> >> Thanks,
> >> Lijo
> >>
> >>>
> >>> BR
> >>> Evan
> >>>>
> >>>> Thanks,
> >>>> Lijo
> >>>>
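For context, a minimal sketch (illustration only) of how the new holder is
consumed: the legacy kv/si dpm tables export the shared helper through
amd_pm_funcs, and amdgpu_pm_compute_clocks() reaches it via the pp_handle.
The si_dpm_funcs hookup is assumed here to mirror the kv_dpm_funcs hunk
shown above:

	/* in si_dpm.c / kv_dpm.c (kv shown in the hunk above) */
	static const struct amd_pm_funcs si_dpm_funcs = {
		/* ... other callbacks ... */
		.change_power_state = amdgpu_dpm_change_power_state_locked,
	};

	/* caller side, as in amdgpu_pm_compute_clocks() */
	adev->powerplay.pp_funcs->change_power_state(adev->powerplay.pp_handle);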
> >>>>> new file mode 100644
> >>>>> index 000000000000..9427c1026e1d
> >>>>> --- /dev/null
> >>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.c
> >>>>> @@ -0,0 +1,1453 @@
> >>>>> +/*
> >>>>> + * Copyright 2021 Advanced Micro Devices, Inc.
> >>>>> + *
> >>>>> + * Permission is hereby granted, free of charge, to any person obtaining a
> >>>>> + * copy of this software and associated documentation files (the "Software"),
> >>>>> + * to deal in the Software without restriction, including without limitation
> >>>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> >>>>> + * and/or sell copies of the Software, and to permit persons to whom the
> >>>>> + * Software is furnished to do so, subject to the following conditions:
> >>>>> + *
> >>>>> + * The above copyright notice and this permission notice shall be included in
> >>>>> + * all copies or substantial portions of the Software.
> >>>>> + *
> >>>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> >>>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> >>>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> >>>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> >>>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> >>>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> >>>>> + * OTHER DEALINGS IN THE SOFTWARE.
> >>>>> + */
> >>>>> +
> >>>>> +#include "amdgpu.h"
> >>>>> +#include "amdgpu_atombios.h"
> >>>>> +#include "amdgpu_i2c.h"
> >>>>> +#include "atom.h"
> >>>>> +#include "amd_pcie.h"
> >>>>> +#include "legacy_dpm.h"
> >>>>> +
> >>>>> +#define amdgpu_dpm_pre_set_power_state(adev) \
> >>>>> +		((adev)->powerplay.pp_funcs->pre_set_power_state((adev)->powerplay.pp_handle))
> >>>>> +
> >>>>> +#define amdgpu_dpm_post_set_power_state(adev) \
> >>>>> +		((adev)->powerplay.pp_funcs->post_set_power_state((adev)->powerplay.pp_handle))
> >>>>> +
> >>>>> +#define amdgpu_dpm_display_configuration_changed(adev) \
> >>>>> +		((adev)->powerplay.pp_funcs->display_configuration_changed((adev)->powerplay.pp_handle))
> >>>>> +
> >>>>> +#define amdgpu_dpm_print_power_state(adev, ps) \
> >>>>> +		((adev)->powerplay.pp_funcs->print_power_state((adev)->powerplay.pp_handle, (ps)))
> >>>>> +
> >>>>> +#define amdgpu_dpm_vblank_too_short(adev) \
> >>>>> +		((adev)->powerplay.pp_funcs->vblank_too_short((adev)->powerplay.pp_handle))
> >>>>> +
> >>>>> +#define amdgpu_dpm_check_state_equal(adev, cps, rps, equal) \
> >>>>> +		((adev)->powerplay.pp_funcs->check_state_equal((adev)->powerplay.pp_handle, (cps), (rps), (equal)))
> >>>>> +
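Note that these wrappers dereference the pp_funcs hook unconditionally, so
every caller in this file is expected to test for a NULL hook first. A short
sketch of the expected call pattern, mirroring the guard already visible in
amdgpu_dpm_pick_power_state() above:

	/* guard the optional hook before using the wrapper macro */
	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
		if (amdgpu_dpm_vblank_too_short(adev))
			single_display = false;
	}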
> >>>>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>>>> +					    u32 clock,
> >>>>> +					    bool strobe_mode,
> >>>>> +					    struct atom_mpll_param *mpll_param)
> >>>>> +{
> >>>>> +	COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1 args;
> >>>>> +	int index = GetIndexIntoMasterTable(COMMAND, ComputeMemoryClockParam);
> >>>>> +	u8 frev, crev;
> >>>>> +
> >>>>> +	memset(&args, 0, sizeof(args));
> >>>>> +	memset(mpll_param, 0, sizeof(struct atom_mpll_param));
> >>>>> +
> >>>>> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> >>>>> +		return -EINVAL;
> >>>>> +
> >>>>> +	switch (frev) {
> >>>>> +	case 2:
> >>>>> +		switch (crev) {
> >>>>> +		case 1:
> >>>>> +			/* SI */
> >>>>> +			args.ulClock = cpu_to_le32(clock);	/* 10 khz */
> >>>>> +			args.ucInputFlag = 0;
> >>>>> +			if (strobe_mode)
> >>>>> +				args.ucInputFlag |= MPLL_INPUT_FLAG_STROBE_MODE_EN;
> >>>>> +
> >>>>> +			amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> +
> >>>>> +			mpll_param->clkfrac = le16_to_cpu(args.ulFbDiv.usFbDivFrac);
> >>>>> +			mpll_param->clkf = le16_to_cpu(args.ulFbDiv.usFbDiv);
> >>>>> +			mpll_param->post_div = args.ucPostDiv;
> >>>>> +			mpll_param->dll_speed = args.ucDllSpeed;
> >>>>> +			mpll_param->bwcntl = args.ucBWCntl;
> >>>>> +			mpll_param->vco_mode =
> >>>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_VCO_MODE_MASK);
> >>>>> +			mpll_param->yclk_sel =
> >>>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_BYPASS_DQ_PLL) ? 1 : 0;
> >>>>> +			mpll_param->qdr =
> >>>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_QDR_ENABLE) ? 1 : 0;
> >>>>> +			mpll_param->half_rate =
> >>>>> +				(args.ucPllCntlFlag & MPLL_CNTL_FLAG_AD_HALF_RATE) ? 1 : 0;
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			return -EINVAL;
> >>>>> +		}
> >>>>> +		break;
> >>>>> +	default:
> >>>>> +		return -EINVAL;
> >>>>> +	}
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
> >>>>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> >>>>> +					     u32 eng_clock, u32 mem_clock)
> >>>>> +{
> >>>>> +	SET_ENGINE_CLOCK_PS_ALLOCATION args;
> >>>>> +	int index = GetIndexIntoMasterTable(COMMAND, DynamicMemorySettings);
> >>>>> +	u32 tmp;
> >>>>> +
> >>>>> +	memset(&args, 0, sizeof(args));
> >>>>> +
> >>>>> +	tmp = eng_clock & SET_CLOCK_FREQ_MASK;
> >>>>> +	tmp |= (COMPUTE_ENGINE_PLL_PARAM << 24);
> >>>>> +
> >>>>> +	args.ulTargetEngineClock = cpu_to_le32(tmp);
> >>>>> +	if (mem_clock)
> >>>>> +		args.sReserved.ulClock = cpu_to_le32(mem_clock & SET_CLOCK_FREQ_MASK);
> >>>>> +
> >>>>> +	amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> +}
> >>>>> +
> >>>>> +union firmware_info {
> >>>>> +	ATOM_FIRMWARE_INFO info;
> >>>>> +	ATOM_FIRMWARE_INFO_V1_2 info_12;
> >>>>> +	ATOM_FIRMWARE_INFO_V1_3 info_13;
> >>>>> +	ATOM_FIRMWARE_INFO_V1_4 info_14;
> >>>>> +	ATOM_FIRMWARE_INFO_V2_1 info_21;
> >>>>> +	ATOM_FIRMWARE_INFO_V2_2 info_22;
> >>>>> +};
> >>>>> +
> >>>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> >>>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd)
> >>>>> +{
> >>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
> >>>>> +	u8 frev, crev;
> >>>>> +	u16 data_offset;
> >>>>> +	union firmware_info *firmware_info;
> >>>>> +
> >>>>> +	*vddc = 0;
> >>>>> +	*vddci = 0;
> >>>>> +	*mvdd = 0;
> >>>>> +
> >>>>> +	if (amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> +				   &frev, &crev, &data_offset)) {
> >>>>> +		firmware_info =
> >>>>> +			(union firmware_info *)(mode_info->atom_context->bios +
> >>>>> +						data_offset);
> >>>>> +		*vddc = le16_to_cpu(firmware_info->info_14.usBootUpVDDCVoltage);
> >>>>> +		if ((frev == 2) && (crev >= 2)) {
> >>>>> +			*vddci = le16_to_cpu(firmware_info->info_22.usBootUpVDDCIVoltage);
> >>>>> +			*mvdd = le16_to_cpu(firmware_info->info_22.usBootUpMVDDCVoltage);
> >>>>> +		}
> >>>>> +	}
> >>>>> +}
> >>>>> +
> >>>>> +union set_voltage {
> >>>>> +	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
> >>>>> +	struct _SET_VOLTAGE_PARAMETERS v1;
> >>>>> +	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
> >>>>> +	struct _SET_VOLTAGE_PARAMETERS_V1_3 v3;
> >>>>> +};
> >>>>> +
> >>>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> >>>>> +			     u16 voltage_id, u16 *voltage)
> >>>>> +{
> >>>>> +	union set_voltage args;
> >>>>> +	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
> >>>>> +	u8 frev, crev;
> >>>>> +
> >>>>> +	if (!amdgpu_atom_parse_cmd_header(adev->mode_info.atom_context, index, &frev, &crev))
> >>>>> +		return -EINVAL;
> >>>>> +
> >>>>> +	switch (crev) {
> >>>>> +	case 1:
> >>>>> +		return -EINVAL;
> >>>>> +	case 2:
> >>>>> +		args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE;
> >>>>> +		args.v2.ucVoltageMode = 0;
> >>>>> +		args.v2.usVoltageLevel = 0;
> >>>>> +
> >>>>> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> +
> >>>>> +		*voltage = le16_to_cpu(args.v2.usVoltageLevel);
> >>>>> +		break;
> >>>>> +	case 3:
> >>>>> +		args.v3.ucVoltageType = voltage_type;
> >>>>> +		args.v3.ucVoltageMode = ATOM_GET_VOLTAGE_LEVEL;
> >>>>> +		args.v3.usVoltageLevel = cpu_to_le16(voltage_id);
> >>>>> +
> >>>>> +		amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
> >>>>> +
> >>>>> +		*voltage = le16_to_cpu(args.v3.usVoltageLevel);
> >>>>> +		break;
> >>>>> +	default:
> >>>>> +		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> +		return -EINVAL;
> >>>>> +	}
> >>>>> +
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
> >>>>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> >>>>> +						      u16 *voltage,
> >>>>> +						      u16 leakage_idx)
> >>>>> +{
> >>>>> +	return amdgpu_atombios_get_max_vddc(adev, VOLTAGE_TYPE_VDDC, leakage_idx, voltage);
> >>>>> +}
> >>>>> +
> >>>>> +union voltage_object_info {
> >>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO v1;
> >>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V2 v2;
> >>>>> +	struct _ATOM_VOLTAGE_OBJECT_INFO_V3_1 v3;
> >>>>> +};
> >>>>> +
> >>>>> +union voltage_object {
> >>>>> +	struct _ATOM_VOLTAGE_OBJECT v1;
> >>>>> +	struct _ATOM_VOLTAGE_OBJECT_V2 v2;
> >>>>> +	union _ATOM_VOLTAGE_OBJECT_V3 v3;
> >>>>> +};
> >>>>> +
> >>>>> +static ATOM_VOLTAGE_OBJECT_V3 *amdgpu_atombios_lookup_voltage_object_v3(ATOM_VOLTAGE_OBJECT_INFO_V3_1 *v3,
> >>>>> +									u8 voltage_type, u8 voltage_mode)
> >>>>> +{
> >>>>> +	u32 size = le16_to_cpu(v3->sHeader.usStructureSize);
> >>>>> +	u32 offset = offsetof(ATOM_VOLTAGE_OBJECT_INFO_V3_1, asVoltageObj[0]);
> >>>>> +	u8 *start = (u8 *)v3;
> >>>>> +
> >>>>> +	while (offset < size) {
> >>>>> +		ATOM_VOLTAGE_OBJECT_V3 *vo = (ATOM_VOLTAGE_OBJECT_V3 *)(start + offset);
> >>>>> +		if ((vo->asGpioVoltageObj.sHeader.ucVoltageType == voltage_type) &&
> >>>>> +		    (vo->asGpioVoltageObj.sHeader.ucVoltageMode == voltage_mode))
> >>>>> +			return vo;
> >>>>> +		offset += le16_to_cpu(vo->asGpioVoltageObj.sHeader.usSize);
> >>>>> +	}
> >>>>> +	return NULL;
> >>>>> +}
> >>>>> +
> >>>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>>>> +			      u8 voltage_type,
> >>>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id)
> >>>>> +{
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> +	u8 frev, crev;
> >>>>> +	u16 data_offset, size;
> >>>>> +	union voltage_object_info *voltage_info;
> >>>>> +	union voltage_object *voltage_object = NULL;
> >>>>> +
> >>>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> +				   &frev, &crev, &data_offset)) {
> >>>>> +		voltage_info = (union voltage_object_info *)
> >>>>> +			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> +
> >>>>> +		switch (frev) {
> >>>>> +		case 3:
> >>>>> +			switch (crev) {
> >>>>> +			case 1:
> >>>>> +				voltage_object = (union voltage_object *)
> >>>>> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> +								      voltage_type,
> >>>>> +								      VOLTAGE_OBJ_SVID2);
> >>>>> +				if (voltage_object) {
> >>>>> +					*svd_gpio_id = voltage_object->v3.asSVID2Obj.ucSVDGpioId;
> >>>>> +					*svc_gpio_id = voltage_object->v3.asSVID2Obj.ucSVCGpioId;
> >>>>> +				} else {
> >>>>> +					return -EINVAL;
> >>>>> +				}
> >>>>> +				break;
> >>>>> +			default:
> >>>>> +				DRM_ERROR("unknown voltage object table\n");
> >>>>> +				return -EINVAL;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			DRM_ERROR("unknown voltage object table\n");
> >>>>> +			return -EINVAL;
> >>>>> +		}
> >>>>> +
> >>>>> +	}
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
> >>>>> +bool
> >>>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>>>> +				u8 voltage_type, u8 voltage_mode)
> >>>>> +{
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> +	u8 frev, crev;
> >>>>> +	u16 data_offset, size;
> >>>>> +	union voltage_object_info *voltage_info;
> >>>>> +
> >>>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> +				   &frev, &crev, &data_offset)) {
> >>>>> +		voltage_info = (union voltage_object_info *)
> >>>>> +			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> +
> >>>>> +		switch (frev) {
> >>>>> +		case 3:
> >>>>> +			switch (crev) {
> >>>>> +			case 1:
> >>>>> +				if (amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> +								  voltage_type, voltage_mode))
> >>>>> +					return true;
> >>>>> +				break;
> >>>>> +			default:
> >>>>> +				DRM_ERROR("unknown voltage object table\n");
> >>>>> +				return false;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			DRM_ERROR("unknown voltage object table\n");
> >>>>> +			return false;
> >>>>> +		}
> >>>>> +
> >>>>> +	}
> >>>>> +	return false;
> >>>>> +}
> >>>>> +
> >>>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>>>> +				      u8 voltage_type, u8 voltage_mode,
> >>>>> +				      struct atom_voltage_table *voltage_table)
> >>>>> +{
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, VoltageObjectInfo);
> >>>>> +	u8 frev, crev;
> >>>>> +	u16 data_offset, size;
> >>>>> +	int i;
> >>>>> +	union voltage_object_info *voltage_info;
> >>>>> +	union voltage_object *voltage_object = NULL;
> >>>>> +
> >>>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> +				   &frev, &crev, &data_offset)) {
> >>>>> +		voltage_info = (union voltage_object_info *)
> >>>>> +			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> +
> >>>>> +		switch (frev) {
> >>>>> +		case 3:
> >>>>> +			switch (crev) {
> >>>>> +			case 1:
> >>>>> +				voltage_object = (union voltage_object *)
> >>>>> +					amdgpu_atombios_lookup_voltage_object_v3(&voltage_info->v3,
> >>>>> +								      voltage_type, voltage_mode);
> >>>>> +				if (voltage_object) {
> >>>>> +					ATOM_GPIO_VOLTAGE_OBJECT_V3 *gpio =
> >>>>> +						&voltage_object->v3.asGpioVoltageObj;
> >>>>> +					VOLTAGE_LUT_ENTRY_V2 *lut;
> >>>>> +					if (gpio->ucGpioEntryNum > MAX_VOLTAGE_ENTRIES)
> >>>>> +						return -EINVAL;
> >>>>> +					lut = &gpio->asVolGpioLut[0];
> >>>>> +					for (i = 0; i < gpio->ucGpioEntryNum; i++) {
> >>>>> +						voltage_table->entries[i].value =
> >>>>> +							le16_to_cpu(lut->usVoltageValue);
> >>>>> +						voltage_table->entries[i].smio_low =
> >>>>> +							le32_to_cpu(lut->ulVoltageId);
> >>>>> +						lut = (VOLTAGE_LUT_ENTRY_V2 *)
> >>>>> +							((u8 *)lut + sizeof(VOLTAGE_LUT_ENTRY_V2));
> >>>>> +					}
> >>>>> +					voltage_table->mask_low = le32_to_cpu(gpio->ulGpioMaskVal);
> >>>>> +					voltage_table->count = gpio->ucGpioEntryNum;
> >>>>> +					voltage_table->phase_delay = gpio->ucPhaseDelay;
> >>>>> +					return 0;
> >>>>> +				}
> >>>>> +				break;
> >>>>> +			default:
> >>>>> +				DRM_ERROR("unknown voltage object table\n");
> >>>>> +				return -EINVAL;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			DRM_ERROR("unknown voltage object table\n");
> >>>>> +			return -EINVAL;
> >>>>> +		}
> >>>>> +	}
> >>>>> +	return -EINVAL;
> >>>>> +}
> >>>>> +
> >>>>> +union vram_info {
> >>>>> +	struct _ATOM_VRAM_INFO_V3 v1_3;
> >>>>> +	struct _ATOM_VRAM_INFO_V4 v1_4;
> >>>>> +	struct _ATOM_VRAM_INFO_HEADER_V2_1 v2_1;
> >>>>> +};
> >>>>> +
> >>>>> +#define MEM_ID_MASK           0xff000000
> >>>>> +#define MEM_ID_SHIFT          24
> >>>>> +#define CLOCK_RANGE_MASK      0x00ffffff
> >>>>> +#define CLOCK_RANGE_SHIFT     0
> >>>>> +#define LOW_NIBBLE_MASK       0xf
> >>>>> +#define DATA_EQU_PREV         0
> >>>>> +#define DATA_FROM_TABLE       4
> >>>>> +
> >>>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>>>> +				      u8 module_index,
> >>>>> +				      struct atom_mc_reg_table *reg_table)
> >>>>> +{
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, VRAM_Info);
> >>>>> +	u8 frev, crev, num_entries, t_mem_id, num_ranges = 0;
> >>>>> +	u32 i = 0, j;
> >>>>> +	u16 data_offset, size;
> >>>>> +	union vram_info *vram_info;
> >>>>> +
> >>>>> +	memset(reg_table, 0, sizeof(struct atom_mc_reg_table));
> >>>>> +
> >>>>> +	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, &size,
> >>>>> +				   &frev, &crev, &data_offset)) {
> >>>>> +		vram_info = (union vram_info *)
> >>>>> +			(adev->mode_info.atom_context->bios + data_offset);
> >>>>> +		switch (frev) {
> >>>>> +		case 1:
> >>>>> +			DRM_ERROR("old table version %d, %d\n", frev, crev);
> >>>>> +			return -EINVAL;
> >>>>> +		case 2:
> >>>>> +			switch (crev) {
> >>>>> +			case 1:
> >>>>> +				if (module_index < vram_info->v2_1.ucNumOfVRAMModule) {
> >>>>> +					ATOM_INIT_REG_BLOCK *reg_block =
> >>>>> +						(ATOM_INIT_REG_BLOCK *)
> >>>>> +						((u8 *)vram_info + le16_to_cpu(vram_info->v2_1.usMemClkPatchTblOffset));
> >>>>> +					ATOM_MEMORY_SETTING_DATA_BLOCK *reg_data =
> >>>>> +						(ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>>>> +						((u8 *)reg_block + (2 * sizeof(u16)) +
> >>>>> +						 le16_to_cpu(reg_block->usRegIndexTblSize));
> >>>>> +					ATOM_INIT_REG_INDEX_FORMAT *format = &reg_block->asRegIndexBuf[0];
> >>>>> +					num_entries = (u8)((le16_to_cpu(reg_block->usRegIndexTblSize)) /
> >>>>> +							   sizeof(ATOM_INIT_REG_INDEX_FORMAT)) - 1;
> >>>>> +					if (num_entries > VBIOS_MC_REGISTER_ARRAY_SIZE)
> >>>>> +						return -EINVAL;
> >>>>> +					while (i < num_entries) {
> >>>>> +						if (format->ucPreRegDataLength & ACCESS_PLACEHOLDER)
> >>>>> +							break;
> >>>>> +						reg_table->mc_reg_address[i].s1 =
> >>>>> +							(u16)(le16_to_cpu(format->usRegIndex));
> >>>>> +						reg_table->mc_reg_address[i].pre_reg_data =
> >>>>> +							(u8)(format->ucPreRegDataLength);
> >>>>> +						i++;
> >>>>> +						format = (ATOM_INIT_REG_INDEX_FORMAT *)
> >>>>> +							((u8 *)format + sizeof(ATOM_INIT_REG_INDEX_FORMAT));
> >>>>> +					}
> >>>>> +					reg_table->last = i;
> >>>>> +					while ((le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK) &&
> >>>>> +					       (num_ranges < VBIOS_MAX_AC_TIMING_ENTRIES)) {
> >>>>> +						t_mem_id = (u8)((le32_to_cpu(*(u32 *)reg_data) & MEM_ID_MASK)
> >>>>> +								>> MEM_ID_SHIFT);
> >>>>> +						if (module_index == t_mem_id) {
> >>>>> +							reg_table->mc_reg_table_entry[num_ranges].mclk_max =
> >>>>> +								(u32)((le32_to_cpu(*(u32 *)reg_data) & CLOCK_RANGE_MASK)
> >>>>> +								      >> CLOCK_RANGE_SHIFT);
> >>>>> +							for (i = 0, j = 1; i < reg_table->last; i++) {
> >>>>> +								if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_FROM_TABLE) {
> >>>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>>>> +										(u32)le32_to_cpu(*((u32 *)reg_data + j));
> >>>>> +									j++;
> >>>>> +								} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
> >>>>> +									reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
> >>>>> +										reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
> >>>>> +								}
> >>>>> +							}
> >>>>> +							num_ranges++;
> >>>>> +						}
> >>>>> +						reg_data = (ATOM_MEMORY_SETTING_DATA_BLOCK *)
> >>>>> +							((u8 *)reg_data + le16_to_cpu(reg_block->usRegDataBlkSize));
> >>>>> +					}
> >>>>> +					if (le32_to_cpu(*(u32 *)reg_data) != END_OF_REG_DATA_BLOCK)
> >>>>> +						return -EINVAL;
> >>>>> +					reg_table->num_entries = num_ranges;
> >>>>> +				} else
> >>>>> +					return -EINVAL;
> >>>>> +				break;
> >>>>> +			default:
> >>>>> +				DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> +				return -EINVAL;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
> >>>>> +			return -EINVAL;
> >>>>> +		}
> >>>>> +		return 0;
> >>>>> +	}
> >>>>> +	return -EINVAL;
> >>>>> +}
> >>>>> +
> >>>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2)
> >>>>> +{
> >>>>> +	const char *s;
> >>>>> +
> >>>>> +	switch (class & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
> >>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_NONE:
> >>>>> +	default:
> >>>>> +		s = "none";
> >>>>> +		break;
> >>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BATTERY:
> >>>>> +		s = "battery";
> >>>>> +		break;
> >>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_BALANCED:
> >>>>> +		s = "balanced";
> >>>>> +		break;
> >>>>> +	case ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE:
> >>>>> +		s = "performance";
> >>>>> +		break;
> >>>>> +	}
> >>>>> +	printk("\tui class: %s\n", s);
> >>>>> +	printk("\tinternal class:");
> >>>>> +	if (((class & ~ATOM_PPLIB_CLASSIFICATION_UI_MASK) == 0) &&
> >>>>> +	    (class2 == 0))
> >>>>> +		pr_cont(" none");
> >>>>> +	else {
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_BOOT)
> >>>>> +			pr_cont(" boot");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>>>> +			pr_cont(" thermal");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
> >>>>> +			pr_cont(" limited_pwr");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_REST)
> >>>>> +			pr_cont(" rest");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_FORCED)
> >>>>> +			pr_cont(" forced");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>>>> +			pr_cont(" 3d_perf");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_OVERDRIVETEMPLATE)
> >>>>> +			pr_cont(" ovrdrv");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
> >>>>> +			pr_cont(" uvd");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_3DLOW)
> >>>>> +			pr_cont(" 3d_low");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>>>> +			pr_cont(" acpi");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>>>> +			pr_cont(" uvd_hd2");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>>>> +			pr_cont(" uvd_hd");
> >>>>> +		if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>>>> +			pr_cont(" uvd_sd");
> >>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
> >>>>> +			pr_cont(" limited_pwr2");
> >>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>>>> +			pr_cont(" ulv");
> >>>>> +		if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>>>> +			pr_cont(" uvd_mvc");
> >>>>> +	}
> >>>>> +	pr_cont("\n");
> >>>>> +}
> >>>>> +
> >>>>> +void amdgpu_dpm_print_cap_info(u32 caps)
> >>>>> +{
> >>>>> +	printk("\tcaps:");
> >>>>> +	if (caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
> >>>>> +		pr_cont(" single_disp");
> >>>>> +	if (caps & ATOM_PPLIB_SUPPORTS_VIDEO_PLAYBACK)
> >>>>> +		pr_cont(" video");
> >>>>> +	if (caps & ATOM_PPLIB_DISALLOW_ON_DC)
> >>>>> +		pr_cont(" no_dc");
> >>>>> +	pr_cont("\n");
> >>>>> +}
> >>>>> +
> >>>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>>>> +				struct amdgpu_ps *rps)
> >>>>> +{
> >>>>> +	printk("\tstatus:");
> >>>>> +	if (rps == adev->pm.dpm.current_ps)
> >>>>> +		pr_cont(" c");
> >>>>> +	if (rps == adev->pm.dpm.requested_ps)
> >>>>> +		pr_cont(" r");
> >>>>> +	if (rps == adev->pm.dpm.boot_ps)
> >>>>> +		pr_cont(" b");
> >>>>> +	pr_cont("\n");
> >>>>> +}
> >>>>> +
> >>>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
> >>>>> +{
> >>>>> +	int i;
> >>>>> +
> >>>>> +	if (adev->powerplay.pp_funcs->print_power_state == NULL)
> >>>>> +		return;
> >>>>> +
> >>>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++)
> >>>>> +		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
> >>>>> +
> >>>>> +}
> >>>>> +
> >>>>> +union power_info {
> >>>>> +	struct _ATOM_POWERPLAY_INFO info;
> >>>>> +	struct _ATOM_POWERPLAY_INFO_V2 info_2;
> >>>>> +	struct _ATOM_POWERPLAY_INFO_V3 info_3;
> >>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE pplib;
> >>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE2 pplib2;
> >>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE3 pplib3;
> >>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE4 pplib4;
> >>>>> +	struct _ATOM_PPLIB_POWERPLAYTABLE5 pplib5;
> >>>>> +};
> >>>>> +
> >>>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev)
> >>>>> +{
> >>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> +	union power_info *power_info;
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> +	u16 data_offset;
> >>>>> +	u8 frev, crev;
> >>>>> +
> >>>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> +				   &frev, &crev, &data_offset))
> >>>>> +		return -EINVAL;
> >>>>> +	power_info = (union power_info *)(mode_info->atom_context->bios + data_offset);
> >>>>> +
> >>>>> +	adev->pm.dpm.platform_caps = le32_to_cpu(power_info->pplib.ulPlatformCaps);
> >>>>> +	adev->pm.dpm.backbias_response_time = le16_to_cpu(power_info->pplib.usBackbiasTime);
> >>>>> +	adev->pm.dpm.voltage_response_time = le16_to_cpu(power_info->pplib.usVoltageTime);
> >>>>> +
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
> >>>>> +union fan_info {
> >>>>> +	struct _ATOM_PPLIB_FANTABLE fan;
> >>>>> +	struct _ATOM_PPLIB_FANTABLE2 fan2;
> >>>>> +	struct _ATOM_PPLIB_FANTABLE3 fan3;
> >>>>> +};
> >>>>> +
> >>>>> +static int amdgpu_parse_clk_voltage_dep_table(struct amdgpu_clock_voltage_dependency_table *amdgpu_table,
> >>>>> +					      ATOM_PPLIB_Clock_Voltage_Dependency_Table *atom_table)
> >>>>> +{
> >>>>> +	u32 size = atom_table->ucNumEntries *
> >>>>> +		sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>>>> +	int i;
> >>>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Record *entry;
> >>>>> +
> >>>>> +	amdgpu_table->entries = kzalloc(size, GFP_KERNEL);
> >>>>> +	if (!amdgpu_table->entries)
> >>>>> +		return -ENOMEM;
> >>>>> +
> >>>>> +	entry = &atom_table->entries[0];
> >>>>> +	for (i = 0; i < atom_table->ucNumEntries; i++) {
> >>>>> +		amdgpu_table->entries[i].clk = le16_to_cpu(entry->usClockLow) |
> >>>>> +			(entry->ucClockHigh << 16);
> >>>>> +		amdgpu_table->entries[i].v = le16_to_cpu(entry->usVoltage);
> >>>>> +		entry = (ATOM_PPLIB_Clock_Voltage_Dependency_Record *)
> >>>>> +			((u8 *)entry + sizeof(ATOM_PPLIB_Clock_Voltage_Dependency_Record));
> >>>>> +	}
> >>>>> +	amdgpu_table->count = atom_table->ucNumEntries;
> >>>>> +
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
> >>>>> +/* sizeof(ATOM_PPLIB_EXTENDEDHEADER) */
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2 12
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3 14
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4 16
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5 18
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6 20
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7 22
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8 24
> >>>>> +#define SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V9 26
> >>>>> +
> >>>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device
> >> *adev)
> >>>>> +{
> >>>>> +	struct amdgpu_mode_info *mode_info = &adev-
> >mode_info;
> >>>>> +	union power_info *power_info;
> >>>>> +	union fan_info *fan_info;
> >>>>> +	ATOM_PPLIB_Clock_Voltage_Dependency_Table
> *dep_table;
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> +	u16 data_offset;
> >>>>> +	u8 frev, crev;
> >>>>> +	int ret, i;
> >>>>> +
> >>>>> +	if (!amdgpu_atom_parse_data_header(mode_info-
> >atom_context,
> >>>> index, NULL,
> >>>>> +				   &frev, &crev, &data_offset))
> >>>>> +		return -EINVAL;
> >>>>> +	power_info = (union power_info *)(mode_info-
> >atom_context-
> >>>>> bios + data_offset);
> >>>>> +
> >>>>> +	/* fan table */
> >>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>>>> +		if (power_info->pplib3.usFanTableOffset) {
> >>>>> +			fan_info = (union fan_info *)(mode_info-
> >>>>> atom_context->bios + data_offset +
> >>>>> +
> le16_to_cpu(power_info-
> >>>>> pplib3.usFanTableOffset));
> >>>>> +			adev->pm.dpm.fan.t_hyst = fan_info-
> >fan.ucTHyst;
> >>>>> +			adev->pm.dpm.fan.t_min =
> le16_to_cpu(fan_info-
> >>>>> fan.usTMin);
> >>>>> +			adev->pm.dpm.fan.t_med =
> le16_to_cpu(fan_info-
> >>>>> fan.usTMed);
> >>>>> +			adev->pm.dpm.fan.t_high =
> le16_to_cpu(fan_info-
> >>>>> fan.usTHigh);
> >>>>> +			adev->pm.dpm.fan.pwm_min =
> >>>> le16_to_cpu(fan_info->fan.usPWMMin);
> >>>>> +			adev->pm.dpm.fan.pwm_med =
> >>>> le16_to_cpu(fan_info->fan.usPWMMed);
> >>>>> +			adev->pm.dpm.fan.pwm_high =
> >>>> le16_to_cpu(fan_info->fan.usPWMHigh);
> >>>>> +			if (fan_info->fan.ucFanTableFormat >= 2)
> >>>>> +				adev->pm.dpm.fan.t_max =
> >>>> le16_to_cpu(fan_info->fan2.usTMax);
> >>>>> +			else
> >>>>> +				adev->pm.dpm.fan.t_max = 10900;
> >>>>> +			adev->pm.dpm.fan.cycle_delay = 100000;
> >>>>> +			if (fan_info->fan.ucFanTableFormat >= 3) {
> >>>>> +				adev->pm.dpm.fan.control_mode =
> >>>> fan_info->fan3.ucFanControlMode;
> >>>>> +				adev-
> >pm.dpm.fan.default_max_fan_pwm
> >>>> =
> >>>>> +					le16_to_cpu(fan_info-
> >>>>> fan3.usFanPWMMax);
> >>>>> +				adev-
> >>>>> pm.dpm.fan.default_fan_output_sensitivity = 4836;
> >>>>> +				adev-
> >pm.dpm.fan.fan_output_sensitivity =
> >>>>> +					le16_to_cpu(fan_info-
> >>>>> fan3.usFanOutputSensitivity);
> >>>>> +			}
> >>>>> +			adev->pm.dpm.fan.ucode_fan_control =
> true;
> >>>>> +		}
> >>>>> +	}
> >>>>> +
> >>>>> +	/* clock dependancy tables, shedding tables */
> >>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE4)) {
> >>>>> +		if (power_info-
> >pplib4.usVddcDependencyOnSCLKOffset) {
> >>>>> +			dep_table =
> >>>> (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> +				(mode_info->atom_context->bios +
> >>>> data_offset +
> >>>>> +				 le16_to_cpu(power_info-
> >>>>> pplib4.usVddcDependencyOnSCLKOffset));
> >>>>> +			ret =
> amdgpu_parse_clk_voltage_dep_table(&adev-
> >>>>> pm.dpm.dyn_state.vddc_dependency_on_sclk,
> >>>>> +
> dep_table);
> >>>>> +			if (ret) {
> >>>>> +
> >>>> 	amdgpu_free_extended_power_table(adev);
> >>>>> +				return ret;
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if (power_info-
> >pplib4.usVddciDependencyOnMCLKOffset) {
> >>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
> >>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
> >>>>> +								 dep_table);
> >>>>> +			if (ret) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return ret;
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
> >>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
> >>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
> >>>>> +								 dep_table);
> >>>>> +			if (ret) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return ret;
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
> >>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
> >>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
> >>>>> +								 dep_table);
> >>>>> +			if (ret) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return ret;
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
> >>>>> +			ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
> >>>>> +				(ATOM_PPLIB_Clock_Voltage_Limit_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib4.usMaxClockVoltageOnDCOffset));
> >>>>> +			if (clk_v->ucNumEntries) {
> >>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.sclk =
> >>>>> +					le16_to_cpu(clk_v->entries[0].usSclkLow) |
> >>>>> +					(clk_v->entries[0].ucSclkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.mclk =
> >>>>> +					le16_to_cpu(clk_v->entries[0].usMclkLow) |
> >>>>> +					(clk_v->entries[0].ucMclkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddc =
> >>>>> +					le16_to_cpu(clk_v->entries[0].usVddc);
> >>>>> +				adev->pm.dpm.dyn_state.max_clock_voltage_on_dc.vddci =
> >>>>> +					le16_to_cpu(clk_v->entries[0].usVddci);
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if (power_info->pplib4.usVddcPhaseShedLimitsTableOffset) {
> >>>>> +			ATOM_PPLIB_PhaseSheddingLimits_Table *psl =
> >>>>> +				(ATOM_PPLIB_PhaseSheddingLimits_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib4.usVddcPhaseShedLimitsTableOffset));
> >>>>> +			ATOM_PPLIB_PhaseSheddingLimits_Record *entry;
> >>>>> +
> >>>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries =
> >>>>> +				kcalloc(psl->ucNumEntries,
> >>>>> +					sizeof(struct amdgpu_phase_shedding_limits_entry),
> >>>>> +					GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +
> >>>>> +			entry = &psl->entries[0];
> >>>>> +			for (i = 0; i < psl->ucNumEntries; i++) {
> >>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].sclk =
> >>>>> +					le16_to_cpu(entry->usSclkLow) | (entry->ucSclkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].mclk =
> >>>>> +					le16_to_cpu(entry->usMclkLow) | (entry->ucMclkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries[i].voltage =
> >>>>> +					le16_to_cpu(entry->usVoltage);
> >>>>> +				entry = (ATOM_PPLIB_PhaseSheddingLimits_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_PhaseSheddingLimits_Record));
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.phase_shedding_limits_table.count =
> >>>>> +				psl->ucNumEntries;
> >>>>> +		}
> >>>>> +	}
> >>>>> +
> >>>>> +	/* cac data */
> >>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE5)) {
> >>>>> +		adev->pm.dpm.tdp_limit = le32_to_cpu(power_info->pplib5.ulTDPLimit);
> >>>>> +		adev->pm.dpm.near_tdp_limit = le32_to_cpu(power_info->pplib5.ulNearTDPLimit);
> >>>>> +		adev->pm.dpm.near_tdp_limit_adjusted = adev->pm.dpm.near_tdp_limit;
> >>>>> +		adev->pm.dpm.tdp_od_limit = le16_to_cpu(power_info->pplib5.usTDPODLimit);
> >>>>> +		if (adev->pm.dpm.tdp_od_limit)
> >>>>> +			adev->pm.dpm.power_control = true;
> >>>>> +		else
> >>>>> +			adev->pm.dpm.power_control = false;
> >>>>> +		adev->pm.dpm.tdp_adjustment = 0;
> >>>>> +		adev->pm.dpm.sq_ramping_threshold = le32_to_cpu(power_info->pplib5.ulSQRampingThreshold);
> >>>>> +		adev->pm.dpm.cac_leakage = le32_to_cpu(power_info->pplib5.ulCACLeakage);
> >>>>> +		adev->pm.dpm.load_line_slope = le16_to_cpu(power_info->pplib5.usLoadLineSlope);
> >>>>> +		if (power_info->pplib5.usCACLeakageTableOffset) {
> >>>>> +			ATOM_PPLIB_CAC_Leakage_Table *cac_table =
> >>>>> +				(ATOM_PPLIB_CAC_Leakage_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(power_info->pplib5.usCACLeakageTableOffset));
> >>>>> +			ATOM_PPLIB_CAC_Leakage_Record *entry;
> >>>>> +			u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
> >>>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			entry = &cac_table->entries[0];
> >>>>> +			for (i = 0; i < cac_table->ucNumEntries; i++) {
> >>>>> +				if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
> >>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc1 =
> >>>>> +						le16_to_cpu(entry->usVddc1);
> >>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc2 =
> >>>>> +						le16_to_cpu(entry->usVddc2);
> >>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc3 =
> >>>>> +						le16_to_cpu(entry->usVddc3);
> >>>>> +				} else {
> >>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].vddc =
> >>>>> +						le16_to_cpu(entry->usVddc);
> >>>>> +					adev->pm.dpm.dyn_state.cac_leakage_table.entries[i].leakage =
> >>>>> +						le32_to_cpu(entry->ulLeakageValue);
> >>>>> +				}
> >>>>> +				entry = (ATOM_PPLIB_CAC_Leakage_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_CAC_Leakage_Record));
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.cac_leakage_table.count = cac_table->ucNumEntries;
> >>>>> +		}
> >>>>> +	}
> >>>>> +
> >>>>> +	/* ext tables */
> >>>>> +	if (le16_to_cpu(power_info->pplib.usTableSize) >=
> >>>>> +	    sizeof(struct _ATOM_PPLIB_POWERPLAYTABLE3)) {
> >>>>> +		ATOM_PPLIB_EXTENDEDHEADER *ext_hdr = (ATOM_PPLIB_EXTENDEDHEADER *)
> >>>>> +			(mode_info->atom_context->bios + data_offset +
> >>>>> +			 le16_to_cpu(power_info->pplib3.usExtendendedHeaderOffset));
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V2) &&
> >>>>> +			ext_hdr->usVCETableOffset) {
> >>>>> +			VCEClockInfoArray *array = (VCEClockInfoArray *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1);
> >>>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *limits =
> >>>>> +				(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> >>>>> +				 1 + array->ucNumEntries * sizeof(VCEClockInfo));
> >>>>> +			ATOM_PPLIB_VCE_State_Table *states =
> >>>>> +				(ATOM_PPLIB_VCE_State_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usVCETableOffset) + 1 +
> >>>>> +				 1 + (array->ucNumEntries * sizeof(VCEClockInfo)) +
> >>>>> +				 1 + (limits->numEntries * sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record)));
> >>>>> +			ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *entry;
> >>>>> +			ATOM_PPLIB_VCE_State_Record *state_entry;
> >>>>> +			VCEClockInfo *vce_clk;
> >>>>> +			u32 size = limits->numEntries *
> >>>>> +				sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
> >>>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
> >>>>> +				kzalloc(size, GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
> >>>>> +				limits->numEntries;
> >>>>> +			entry = &limits->entries[0];
> >>>>> +			state_entry = &states->entries[0];
> >>>>> +			for (i = 0; i < limits->numEntries; i++) {
> >>>>> +				vce_clk = (VCEClockInfo *)
> >>>>> +					((u8 *)&array->entries[0] +
> >>>>> +					 (entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> >>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].evclk =
> >>>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].ecclk =
> >>>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries[i].v =
> >>>>> +					le16_to_cpu(entry->usVoltage);
> >>>>> +				entry = (ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record));
> >>>>> +			}
> >>>>> +			adev->pm.dpm.num_of_vce_states =
> >>>>> +					states->numEntries > AMD_MAX_VCE_LEVELS ?
> >>>>> +					AMD_MAX_VCE_LEVELS : states->numEntries;
> >>>>> +			for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
> >>>>> +				vce_clk = (VCEClockInfo *)
> >>>>> +					((u8 *)&array->entries[0] +
> >>>>> +					 (state_entry->ucVCEClockInfoIndex * sizeof(VCEClockInfo)));
> >>>>> +				adev->pm.dpm.vce_states[i].evclk =
> >>>>> +					le16_to_cpu(vce_clk->usEVClkLow) | (vce_clk->ucEVClkHigh << 16);
> >>>>> +				adev->pm.dpm.vce_states[i].ecclk =
> >>>>> +					le16_to_cpu(vce_clk->usECClkLow) | (vce_clk->ucECClkHigh << 16);
> >>>>> +				adev->pm.dpm.vce_states[i].clk_idx =
> >>>>> +					state_entry->ucClockInfoIndex & 0x3f;
> >>>>> +				adev->pm.dpm.vce_states[i].pstate =
> >>>>> +					(state_entry->ucClockInfoIndex & 0xc0) >> 6;
> >>>>> +				state_entry = (ATOM_PPLIB_VCE_State_Record *)
> >>>>> +					((u8 *)state_entry + sizeof(ATOM_PPLIB_VCE_State_Record));
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V3) &&
> >>>>> +			ext_hdr->usUVDTableOffset) {
> >>>>> +			UVDClockInfoArray *array = (UVDClockInfoArray *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1);
> >>>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *limits =
> >>>>> +				(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usUVDTableOffset) + 1 +
> >>>>> +				 1 + (array->ucNumEntries * sizeof (UVDClockInfo)));
> >>>>> +			ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *entry;
> >>>>> +			u32 size = limits->numEntries *
> >>>>> +				sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
> >>>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
> >>>>> +				kzalloc(size, GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
> >>>>> +				limits->numEntries;
> >>>>> +			entry = &limits->entries[0];
> >>>>> +			for (i = 0; i < limits->numEntries; i++) {
> >>>>> +				UVDClockInfo *uvd_clk = (UVDClockInfo *)
> >>>>> +					((u8 *)&array->entries[0] +
> >>>>> +					 (entry->ucUVDClockInfoIndex * sizeof(UVDClockInfo)));
> >>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].vclk =
> >>>>> +					le16_to_cpu(uvd_clk->usVClkLow) | (uvd_clk->ucVClkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
> >>>>> +					le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
> >>>>> +					le16_to_cpu(entry->usVoltage);
> >>>>> +				entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V4) &&
> >>>>> +			ext_hdr->usSAMUTableOffset) {
> >>>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Table *limits =
> >>>>> +				(ATOM_PPLIB_SAMClk_Voltage_Limit_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usSAMUTableOffset) + 1);
> >>>>> +			ATOM_PPLIB_SAMClk_Voltage_Limit_Record *entry;
> >>>>> +			u32 size = limits->numEntries *
> >>>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
> >>>>> +				kzalloc(size, GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
> >>>>> +				limits->numEntries;
> >>>>> +			entry = &limits->entries[0];
> >>>>> +			for (i = 0; i < limits->numEntries; i++) {
> >>>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].clk =
> >>>>> +					le16_to_cpu(entry->usSAMClockLow) | (entry->ucSAMClockHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries[i].v =
> >>>>> +					le16_to_cpu(entry->usVoltage);
> >>>>> +				entry = (ATOM_PPLIB_SAMClk_Voltage_Limit_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_SAMClk_Voltage_Limit_Record));
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V5) &&
> >>>>> +		    ext_hdr->usPPMTableOffset) {
> >>>>> +			ATOM_PPLIB_PPM_Table *ppm = (ATOM_PPLIB_PPM_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usPPMTableOffset));
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table =
> >>>>> +				kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.ppm_table) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
> >>>>> +				le16_to_cpu(ppm->usCpuCoreNumber);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdp =
> >>>>> +				le32_to_cpu(ppm->ulPlatformTDP);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdp =
> >>>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDP);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->platform_tdc =
> >>>>> +				le32_to_cpu(ppm->ulPlatformTDC);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->small_ac_platform_tdc =
> >>>>> +				le32_to_cpu(ppm->ulSmallACPlatformTDC);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->apu_tdp =
> >>>>> +				le32_to_cpu(ppm->ulApuTDP);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_tdp =
> >>>>> +				le32_to_cpu(ppm->ulDGpuTDP);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->dgpu_ulv_power =
> >>>>> +				le32_to_cpu(ppm->ulDGpuUlvPower);
> >>>>> +			adev->pm.dpm.dyn_state.ppm_table->tj_max =
> >>>>> +				le32_to_cpu(ppm->ulTjmax);
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V6) &&
> >>>>> +			ext_hdr->usACPTableOffset) {
> >>>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Table *limits =
> >>>>> +				(ATOM_PPLIB_ACPClk_Voltage_Limit_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usACPTableOffset) + 1);
> >>>>> +			ATOM_PPLIB_ACPClk_Voltage_Limit_Record *entry;
> >>>>> +			u32 size = limits->numEntries *
> >>>>> +				sizeof(struct amdgpu_clock_voltage_dependency_entry);
> >>>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
> >>>>> +				kzalloc(size, GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
> >>>>> +				limits->numEntries;
> >>>>> +			entry = &limits->entries[0];
> >>>>> +			for (i = 0; i < limits->numEntries; i++) {
> >>>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].clk =
> >>>>> +					le16_to_cpu(entry->usACPClockLow) | (entry->ucACPClockHigh << 16);
> >>>>> +				adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries[i].v =
> >>>>> +					le16_to_cpu(entry->usVoltage);
> >>>>> +				entry = (ATOM_PPLIB_ACPClk_Voltage_Limit_Record *)
> >>>>> +					((u8 *)entry + sizeof(ATOM_PPLIB_ACPClk_Voltage_Limit_Record));
> >>>>> +			}
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V7) &&
> >>>>> +			ext_hdr->usPowerTuneTableOffset) {
> >>>>> +			u8 rev = *(u8 *)(mode_info->atom_context->bios + data_offset +
> >>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>>>> +			ATOM_PowerTune_Table *pt;
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table =
> >>>>> +				kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
> >>>>> +			if (!adev->pm.dpm.dyn_state.cac_tdp_table) {
> >>>>> +				amdgpu_free_extended_power_table(adev);
> >>>>> +				return -ENOMEM;
> >>>>> +			}
> >>>>> +			if (rev > 0) {
> >>>>> +				ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
> >>>>> +					(mode_info->atom_context->bios + data_offset +
> >>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit =
> >>>>> +					ppt->usMaximumPowerDeliveryLimit;
> >>>>> +				pt = &ppt->power_tune_table;
> >>>>> +			} else {
> >>>>> +				ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
> >>>>> +					(mode_info->atom_context->bios + data_offset +
> >>>>> +					 le16_to_cpu(ext_hdr->usPowerTuneTableOffset));
> >>>>> +				adev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 255;
> >>>>> +				pt = &ppt->power_tune_table;
> >>>>> +			}
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdp = le16_to_cpu(pt->usTDP);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->configurable_tdp =
> >>>>> +				le16_to_cpu(pt->usConfigurableTDP);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->tdc = le16_to_cpu(pt->usTDC);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->battery_power_limit =
> >>>>> +				le16_to_cpu(pt->usBatteryPowerLimit);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->small_power_limit =
> >>>>> +				le16_to_cpu(pt->usSmallPowerLimit);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->low_cac_leakage =
> >>>>> +				le16_to_cpu(pt->usLowCACLeakage);
> >>>>> +			adev->pm.dpm.dyn_state.cac_tdp_table->high_cac_leakage =
> >>>>> +				le16_to_cpu(pt->usHighCACLeakage);
> >>>>> +		}
> >>>>> +		if ((le16_to_cpu(ext_hdr->usSize) >= SIZE_OF_ATOM_PPLIB_EXTENDEDHEADER_V8) &&
> >>>>> +				ext_hdr->usSclkVddgfxTableOffset) {
> >>>>> +			dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
> >>>>> +				(mode_info->atom_context->bios + data_offset +
> >>>>> +				 le16_to_cpu(ext_hdr->usSclkVddgfxTableOffset));
> >>>>> +			ret = amdgpu_parse_clk_voltage_dep_table(
> >>>>> +					&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
> >>>>> +					dep_table);
> >>>>> +			if (ret) {
> >>>>> +				kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
> >>>>> +				return ret;
> >>>>> +			}
> >>>>> +		}
> >>>>> +	}
> >>>>> +
> >>>>> +	return 0;
> >>>>> +}
> >>>>> +
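
One pattern worth spelling out, since it repeats in every table above: the PPLib
clocks are stored as a 16-bit little-endian low word plus an 8-bit high byte. A
minimal sketch of the decode, with a helper name of my own invention (the patch
open-codes it each time):

	/* illustrative only: reassemble a PPLib clock from its
	 * usXxxLow/ucXxxHigh pair, exactly as the expressions above do;
	 * low_cpu is the value after le16_to_cpu()
	 */
	static inline u32 pplib_decode_clock(u16 low_cpu, u8 high)
	{
		return low_cpu | ((u32)high << 16);
	}

Purely an aside; keeping the open-coded form here matches the code being moved.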
> >>>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev)
> >>>>> +{
> >>>>> +	struct amdgpu_dpm_dynamic_state *dyn_state = &adev->pm.dpm.dyn_state;
> >>>>> +
> >>>>> +	kfree(dyn_state->vddc_dependency_on_sclk.entries);
> >>>>> +	kfree(dyn_state->vddci_dependency_on_mclk.entries);
> >>>>> +	kfree(dyn_state->vddc_dependency_on_mclk.entries);
> >>>>> +	kfree(dyn_state->mvdd_dependency_on_mclk.entries);
> >>>>> +	kfree(dyn_state->cac_leakage_table.entries);
> >>>>> +	kfree(dyn_state->phase_shedding_limits_table.entries);
> >>>>> +	kfree(dyn_state->ppm_table);
> >>>>> +	kfree(dyn_state->cac_tdp_table);
> >>>>> +	kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
> >>>>> +	kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
> >>>>> +	kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
> >>>>> +	kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
> >>>>> +	kfree(dyn_state->vddgfx_dependency_on_sclk.entries);
> >>>>> +}
> >>>>> +
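
Since kfree(NULL) is a no-op, this free routine is safe to call from any of the
partial-allocation error paths in the parser above, and a caller may assume
everything is released on failure. Illustrative calling pattern (the caller
side is my sketch, not part of this hunk):

	ret = amdgpu_parse_extended_power_table(adev);
	if (ret) {
		/* the parser already freed whatever it had allocated */
		return ret;
	}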
> >>>>> +static const char *pp_lib_thermal_controller_names[] = {
> >>>>> +	"NONE",
> >>>>> +	"lm63",
> >>>>> +	"adm1032",
> >>>>> +	"adm1030",
> >>>>> +	"max6649",
> >>>>> +	"lm64",
> >>>>> +	"f75375",
> >>>>> +	"RV6xx",
> >>>>> +	"RV770",
> >>>>> +	"adt7473",
> >>>>> +	"NONE",
> >>>>> +	"External GPIO",
> >>>>> +	"Evergreen",
> >>>>> +	"emc2103",
> >>>>> +	"Sumo",
> >>>>> +	"Northern Islands",
> >>>>> +	"Southern Islands",
> >>>>> +	"lm96163",
> >>>>> +	"Sea Islands",
> >>>>> +	"Kaveri/Kabini",
> >>>>> +};
> >>>>> +
> >>>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev)
> >>>>> +{
> >>>>> +	struct amdgpu_mode_info *mode_info = &adev->mode_info;
> >>>>> +	ATOM_PPLIB_POWERPLAYTABLE *power_table;
> >>>>> +	int index = GetIndexIntoMasterTable(DATA, PowerPlayInfo);
> >>>>> +	ATOM_PPLIB_THERMALCONTROLLER *controller;
> >>>>> +	struct amdgpu_i2c_bus_rec i2c_bus;
> >>>>> +	u16 data_offset;
> >>>>> +	u8 frev, crev;
> >>>>> +
> >>>>> +	if (!amdgpu_atom_parse_data_header(mode_info->atom_context, index, NULL,
> >>>>> +					   &frev, &crev, &data_offset))
> >>>>> +		return;
> >>>>> +	power_table = (ATOM_PPLIB_POWERPLAYTABLE *)
> >>>>> +		(mode_info->atom_context->bios + data_offset);
> >>>>> +	controller = &power_table->sThermalController;
> >>>>> +
> >>>>> +	/* add the i2c bus for thermal/fan chip */
> >>>>> +	if (controller->ucType > 0) {
> >>>>> +		if (controller->ucFanParameters & ATOM_PP_FANPARAMETERS_NOFAN)
> >>>>> +			adev->pm.no_fan = true;
> >>>>> +		adev->pm.fan_pulses_per_revolution =
> >>>>> +			controller->ucFanParameters & ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
> >>>>> +		if (adev->pm.fan_pulses_per_revolution) {
> >>>>> +			adev->pm.fan_min_rpm = controller->ucFanMinRPM;
> >>>>> +			adev->pm.fan_max_rpm = controller->ucFanMaxRPM;
> >>>>> +		}
> >>>>> +		if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV6XX;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_RV770;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EVERGREEN;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SUMO) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SUMO;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_NISLANDS) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_NI;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_SISLANDS) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_SI;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_CISLANDS) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_CI;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_KAVERI) {
> >>>>> +			DRM_INFO("Internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_KV;
> >>>>> +		} else if (controller->ucType == ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) {
> >>>>> +			DRM_INFO("External GPIO thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL_GPIO;
> >>>>> +		} else if (controller->ucType ==
> >>>>> +			   ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL) {
> >>>>> +			DRM_INFO("ADT7473 with internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_ADT7473_WITH_INTERNAL;
> >>>>> +		} else if (controller->ucType ==
> >>>>> +			   ATOM_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL) {
> >>>>> +			DRM_INFO("EMC2103 with internal thermal controller %s fan control\n",
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EMC2103_WITH_INTERNAL;
> >>>>> +		} else if (controller->ucType < ARRAY_SIZE(pp_lib_thermal_controller_names)) {
> >>>>> +			DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
> >>>>> +				 pp_lib_thermal_controller_names[controller->ucType],
> >>>>> +				 controller->ucI2cAddress >> 1,
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +			adev->pm.int_thermal_type = THERMAL_TYPE_EXTERNAL;
> >>>>> +			i2c_bus = amdgpu_atombios_lookup_i2c_gpio(adev, controller->ucI2cLine);
> >>>>> +			adev->pm.i2c_bus = amdgpu_i2c_lookup(adev, &i2c_bus);
> >>>>> +			if (adev->pm.i2c_bus) {
> >>>>> +				struct i2c_board_info info = { };
> >>>>> +				const char *name = pp_lib_thermal_controller_names[controller->ucType];
> >>>>> +				info.addr = controller->ucI2cAddress >> 1;
> >>>>> +				strlcpy(info.type, name, sizeof(info.type));
> >>>>> +				i2c_new_client_device(&adev->pm.i2c_bus->adapter, &info);
> >>>>> +			}
> >>>>> +		} else {
> >>>>> +			DRM_INFO("Unknown thermal controller type %d at 0x%02x %s fan control\n",
> >>>>> +				 controller->ucType,
> >>>>> +				 controller->ucI2cAddress >> 1,
> >>>>> +				 (controller->ucFanParameters &
> >>>>> +				  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
> >>>>> +		}
> >>>>> +	}
> >>>>> +}
> >>>>> +
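
The if/else ladder differs between the internal-controller branches only in the
THERMAL_TYPE_* constant, so it could in principle be table-driven. An untested
sketch, mapping abbreviated (type names as in today's amdgpu.h):

	struct thermal_ctrl_map {
		u8 atom_type;
		enum amdgpu_int_thermal_type thermal_type;
	};

	static const struct thermal_ctrl_map internal_ctrl_map[] = {
		{ ATOM_PP_THERMALCONTROLLER_RV6xx, THERMAL_TYPE_RV6XX },
		{ ATOM_PP_THERMALCONTROLLER_RV770, THERMAL_TYPE_RV770 },
		/* ... remaining internal controllers ... */
	};

That said, keeping the moved code byte-for-byte identical is the safer choice
for a pure code-motion series, so this is an aside rather than a request.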
> >>>>> +struct amd_vce_state *amdgpu_get_vce_clock_state(void *handle, u32 idx)
> >>>>> +{
> >>>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>>>> +
> >>>>> +	if (idx < adev->pm.dpm.num_of_vce_states)
> >>>>> +		return &adev->pm.dpm.vce_states[idx];
> >>>>> +
> >>>>> +	return NULL;
> >>>>> +}
> >>>>> +
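
For anyone tracing callers: this is reached through the get_vce_clock_state
hook, and the NULL return for an out-of-range index is load-bearing. A sketch
of the expected calling pattern (caller code is illustrative, not part of this
patch):

	struct amd_vce_state *vce_state;

	vce_state = amdgpu_get_vce_clock_state(adev, idx);
	if (!vce_state)
		return;	/* idx >= num_of_vce_states */
	/* otherwise vce_state->evclk / vce_state->ecclk are valid */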
> >>>>> +static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
> >>>>> +						     enum amd_pm_state_type dpm_state)
> >>>>> +{
> >>>>> +	int i;
> >>>>> +	struct amdgpu_ps *ps;
> >>>>> +	u32 ui_class;
> >>>>> +	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
> >>>>> +		true : false;
> >>>>> +
> >>>>> +	/* check if the vblank period is too short to adjust the mclk */
> >>>>> +	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
> >>>>> +		if (amdgpu_dpm_vblank_too_short(adev))
> >>>>> +			single_display = false;
> >>>>> +	}
> >>>>> +
> >>>>> +	/* certain older asics have a separate 3D performance state,
> >>>>> +	 * so try that first if the user selected performance
> >>>>> +	 */
> >>>>> +	if (dpm_state == POWER_STATE_TYPE_PERFORMANCE)
> >>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
> >>>>> +	/* balanced states don't exist at the moment */
> >>>>> +	if (dpm_state == POWER_STATE_TYPE_BALANCED)
> >>>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> +
> >>>>> +restart_search:
> >>>>> +	/* Pick the best power state based on current conditions */
> >>>>> +	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
> >>>>> +		ps = &adev->pm.dpm.ps[i];
> >>>>> +		ui_class = ps->class & ATOM_PPLIB_CLASSIFICATION_UI_MASK;
> >>>>> +		switch (dpm_state) {
> >>>>> +		/* user states */
> >>>>> +		case POWER_STATE_TYPE_BATTERY:
> >>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BATTERY) {
> >>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> +					if (single_display)
> >>>>> +						return ps;
> >>>>> +				} else
> >>>>> +					return ps;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_BALANCED:
> >>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_BALANCED) {
> >>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> +					if (single_display)
> >>>>> +						return ps;
> >>>>> +				} else
> >>>>> +					return ps;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_PERFORMANCE:
> >>>>> +			if (ui_class == ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE) {
> >>>>> +				if (ps->caps & ATOM_PPLIB_SINGLE_DISPLAY_ONLY) {
> >>>>> +					if (single_display)
> >>>>> +						return ps;
> >>>>> +				} else
> >>>>> +					return ps;
> >>>>> +			}
> >>>>> +			break;
> >>>>> +		/* internal states */
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD:
> >>>>> +			if (adev->pm.dpm.uvd_ps)
> >>>>> +				return adev->pm.dpm.uvd_ps;
> >>>>> +			else
> >>>>> +				break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_SDSTATE)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HDSTATE)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_HD2STATE)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_MVC)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_BOOT:
> >>>>> +			return adev->pm.dpm.boot_ps;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_THERMAL)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_ACPI)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_ULV:
> >>>>> +			if (ps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>>>> +			if (ps->class & ATOM_PPLIB_CLASSIFICATION_3DPERFORMANCE)
> >>>>> +				return ps;
> >>>>> +			break;
> >>>>> +		default:
> >>>>> +			break;
> >>>>> +		}
> >>>>> +	}
> >>>>> +	/* use a fallback state if we didn't match */
> >>>>> +	switch (dpm_state) {
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_SD:
> >>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD;
> >>>>> +		goto restart_search;
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD:
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_HD2:
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_UVD_MVC:
> >>>>> +		if (adev->pm.dpm.uvd_ps) {
> >>>>> +			return adev->pm.dpm.uvd_ps;
> >>>>> +		} else {
> >>>>> +			dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> +			goto restart_search;
> >>>>> +		}
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_THERMAL:
> >>>>> +		dpm_state = POWER_STATE_TYPE_INTERNAL_ACPI;
> >>>>> +		goto restart_search;
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_ACPI:
> >>>>> +		dpm_state = POWER_STATE_TYPE_BATTERY;
> >>>>> +		goto restart_search;
> >>>>> +	case POWER_STATE_TYPE_BATTERY:
> >>>>> +	case POWER_STATE_TYPE_BALANCED:
> >>>>> +	case POWER_STATE_TYPE_INTERNAL_3DPERF:
> >>>>> +		dpm_state = POWER_STATE_TYPE_PERFORMANCE;
> >>>>> +		goto restart_search;
> >>>>> +	default:
> >>>>> +		break;
> >>>>> +	}
> >>>>> +
> >>>>> +	return NULL;
> >>>>> +}
> >>>>> +
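
The restart_search logic is easier to review as a chain; summarizing the
fallback switch above in one comment:

	/*
	 * Fallbacks walked until some state matches:
	 *   UVD_SD -> UVD_HD -> uvd_ps if set, else PERFORMANCE
	 *   THERMAL -> ACPI -> BATTERY -> PERFORMANCE
	 *   BATTERY / BALANCED / 3DPERF -> PERFORMANCE
	 */

so every request eventually degrades to a performance state, and NULL is
returned only when even that has no match in the table.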
> >>>>> +int amdgpu_dpm_change_power_state_locked(void *handle)
> >>>>> +{
> >>>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>>>> +	struct amdgpu_ps *ps;
> >>>>> +	enum amd_pm_state_type dpm_state;
> >>>>> +	int ret;
> >>>>> +	bool equal = false;
> >>>>> +
> >>>>> +	/* if dpm init failed */
> >>>>> +	if (!adev->pm.dpm_enabled)
> >>>>> +		return 0;
> >>>>> +
> >>>>> +	if (adev->pm.dpm.user_state != adev->pm.dpm.state) {
> >>>>> +		/* add other state override checks here */
> >>>>> +		if ((!adev->pm.dpm.thermal_active) &&
> >>>>> +		    (!adev->pm.dpm.uvd_active))
> >>>>> +			adev->pm.dpm.state = adev->pm.dpm.user_state;
> >>>>> +	}
> >>>>> +	dpm_state = adev->pm.dpm.state;
> >>>>> +
> >>>>> +	ps = amdgpu_dpm_pick_power_state(adev, dpm_state);
> >>>>> +	if (ps)
> >>>>> +		adev->pm.dpm.requested_ps = ps;
> >>>>> +	else
> >>>>> +		return -EINVAL;
> >>>>> +
> >>>>> +	if (amdgpu_dpm == 1 && adev->powerplay.pp_funcs->print_power_state) {
> >>>>> +		printk("switching from power state:\n");
> >>>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.current_ps);
> >>>>> +		printk("switching to power state:\n");
> >>>>> +		amdgpu_dpm_print_power_state(adev, adev->pm.dpm.requested_ps);
> >>>>> +	}
> >>>>> +
> >>>>> +	/* update whether vce is active */
> >>>>> +	ps->vce_active = adev->pm.dpm.vce_active;
> >>>>> +	if (adev->powerplay.pp_funcs->display_configuration_changed)
> >>>>> +		amdgpu_dpm_display_configuration_changed(adev);
> >>>>> +
> >>>>> +	ret = amdgpu_dpm_pre_set_power_state(adev);
> >>>>> +	if (ret)
> >>>>> +		return ret;
> >>>>> +
> >>>>> +	if (adev->powerplay.pp_funcs->check_state_equal) {
> >>>>> +		if (0 != amdgpu_dpm_check_state_equal(adev, adev->pm.dpm.current_ps,
> >>>>> +						      adev->pm.dpm.requested_ps, &equal))
> >>>>> +			equal = false;
> >>>>> +	}
> >>>>> +
> >>>>> +	if (equal)
> >>>>> +		return 0;
> >>>>> +
> >>>>> +	if (adev->powerplay.pp_funcs->set_power_state)
> >>>>> +		adev->powerplay.pp_funcs->set_power_state(adev->powerplay.pp_handle);
> >>>>> +
> >>>>> +	amdgpu_dpm_post_set_power_state(adev);
> >>>>> +
> >>>>> +	adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
> >>>>> +	adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
> >>>>> +
> >>>>> +	if (adev->powerplay.pp_funcs->force_performance_level) {
> >>>>> +		if (adev->pm.dpm.thermal_active) {
> >>>>> +			enum amd_dpm_forced_level level = adev->pm.dpm.forced_level;
> >>>>> +			/* force low perf level for thermal */
> >>>>> +			amdgpu_dpm_force_performance_level(adev, AMD_DPM_FORCED_LEVEL_LOW);
> >>>>> +			/* save the user's level */
> >>>>> +			adev->pm.dpm.forced_level = level;
> >>>>> +		} else {
> >>>>> +			/* otherwise, user selected level */
> >>>>> +			amdgpu_dpm_force_performance_level(adev, adev->pm.dpm.forced_level);
> >>>>> +		}
> >>>>> +	}
> >>>>> +
> >>>>> +	return 0;
> >>>>> +}
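
With this exported and plugged into the ASIC function tables below, the common
code can dispatch a power-state change without knowing whether si or kv is
underneath. The amdgpu_dpm.c side presumably reduces to something like this
(my sketch; the actual wrapper lands elsewhere in the series):

	if (pp_funcs && pp_funcs->change_power_state)
		pp_funcs->change_power_state(adev->powerplay.pp_handle);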
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>>>> new file mode 100644
> >>>>> index 000000000000..4adc765c8824
> >>>>> --- /dev/null
> >>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/legacy_dpm.h
> >>>>> @@ -0,0 +1,70 @@
> >>>>> +/*
> >>>>> + * Copyright 2021 Advanced Micro Devices, Inc.
> >>>>> + *
> >>>>> + * Permission is hereby granted, free of charge, to any person obtaining a
> >>>>> + * copy of this software and associated documentation files (the "Software"),
> >>>>> + * to deal in the Software without restriction, including without limitation
> >>>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> >>>>> + * and/or sell copies of the Software, and to permit persons to whom the
> >>>>> + * Software is furnished to do so, subject to the following conditions:
> >>>>> + *
> >>>>> + * The above copyright notice and this permission notice shall be included in
> >>>>> + * all copies or substantial portions of the Software.
> >>>>> + *
> >>>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> >>>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> >>>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> >>>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> >>>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> >>>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> >>>>> + * OTHER DEALINGS IN THE SOFTWARE.
> >>>>> + *
> >>>>> + */
> >>>>> +#ifndef __LEGACY_DPM_H__
> >>>>> +#define __LEGACY_DPM_H__
> >>>>> +
> >>>>> +int amdgpu_atombios_get_memory_pll_dividers(struct amdgpu_device *adev,
> >>>>> +					    u32 clock,
> >>>>> +					    bool strobe_mode,
> >>>>> +					    struct atom_mpll_param *mpll_param);
> >>>>> +
> >>>>> +void amdgpu_atombios_set_engine_dram_timings(struct amdgpu_device *adev,
> >>>>> +					     u32 eng_clock, u32 mem_clock);
> >>>>> +
> >>>>> +void amdgpu_atombios_get_default_voltages(struct amdgpu_device *adev,
> >>>>> +					  u16 *vddc, u16 *vddci, u16 *mvdd);
> >>>>> +
> >>>>> +int amdgpu_atombios_get_max_vddc(struct amdgpu_device *adev, u8 voltage_type,
> >>>>> +			     u16 voltage_id, u16 *voltage);
> >>>>> +
> >>>>> +int amdgpu_atombios_get_leakage_vddc_based_on_leakage_idx(struct amdgpu_device *adev,
> >>>>> +						      u16 *voltage,
> >>>>> +						      u16 leakage_idx);
> >>>>> +
> >>>>> +int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
> >>>>> +			      u8 voltage_type,
> >>>>> +			      u8 *svd_gpio_id, u8 *svc_gpio_id);
> >>>>> +
> >>>>> +bool
> >>>>> +amdgpu_atombios_is_voltage_gpio(struct amdgpu_device *adev,
> >>>>> +				u8 voltage_type, u8 voltage_mode);
> >>>>> +int amdgpu_atombios_get_voltage_table(struct amdgpu_device *adev,
> >>>>> +				      u8 voltage_type, u8 voltage_mode,
> >>>>> +				      struct atom_voltage_table *voltage_table);
> >>>>> +
> >>>>> +int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
> >>>>> +				      u8 module_index,
> >>>>> +				      struct atom_mc_reg_table *reg_table);
> >>>>> +
> >>>>> +void amdgpu_dpm_print_class_info(u32 class, u32 class2);
> >>>>> +void amdgpu_dpm_print_cap_info(u32 caps);
> >>>>> +void amdgpu_dpm_print_ps_status(struct amdgpu_device *adev,
> >>>>> +				struct amdgpu_ps *rps);
> >>>>> +int amdgpu_get_platform_caps(struct amdgpu_device *adev);
> >>>>> +int amdgpu_parse_extended_power_table(struct amdgpu_device *adev);
> >>>>> +void amdgpu_free_extended_power_table(struct amdgpu_device *adev);
> >>>>> +void amdgpu_add_thermal_controller(struct amdgpu_device *adev);
> >>>>> +struct amd_vce_state *amdgpu_get_vce_clock_state(void *handle, u32 idx);
> >>>>> +int amdgpu_dpm_change_power_state_locked(void *handle);
> >>>>> +void amdgpu_pm_print_power_states(struct amdgpu_device *adev);
> >>>>> +#endif
> >>>>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>>>> index 4f84d8b893f1..a2881c90d187 100644
> >>>>> --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>>>> +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
> >>>>> @@ -37,6 +37,7 @@
> >>>>>  #include <linux/math64.h>
> >>>>>  #include <linux/seq_file.h>
> >>>>>  #include <linux/firmware.h>
> >>>>> +#include <legacy_dpm.h>
> >>>>> 
> >>>>>  #define MC_CG_ARB_FREQ_F0           0x0a
> >>>>>  #define MC_CG_ARB_FREQ_F1           0x0b
> >>>>> @@ -8101,6 +8102,7 @@ static const struct amd_pm_funcs si_dpm_funcs = {
> >>>>>  	.check_state_equal = &si_check_state_equal,
> >>>>>  	.get_vce_clock_state = amdgpu_get_vce_clock_state,
> >>>>>  	.read_sensor = &si_dpm_read_sensor,
> >>>>> +	.change_power_state = amdgpu_dpm_change_power_state_locked,
> >>>>>  };
> >>>>> 
> >>>>>  static const struct amdgpu_irq_src_funcs si_dpm_irq_funcs = {


Thread overview: 44+ messages
2021-11-30  7:42 [PATCH V2 00/17] Unified entry point for other blocks to interact with power Evan Quan
2021-11-30  7:42 ` [PATCH V2 01/17] drm/amd/pm: do not expose implementation details to other blocks out of power Evan Quan
2021-11-30  8:09   ` Lazar, Lijo
2021-12-01  1:59     ` Quan, Evan
2021-12-01  3:33       ` Lazar, Lijo
2021-12-01  7:07         ` Quan, Evan
2021-12-01  3:37     ` Lazar, Lijo
2021-11-30  7:42 ` [PATCH V2 02/17] drm/amd/pm: do not expose power implementation details to amdgpu_pm.c Evan Quan
2021-11-30 13:04   ` Chen, Guchun
2021-12-01  2:06     ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 03/17] drm/amd/pm: do not expose power implementation details to display Evan Quan
2021-11-30  7:42 ` [PATCH V2 04/17] drm/amd/pm: do not expose those APIs used internally only in amdgpu_dpm.c Evan Quan
2021-11-30  7:42 ` [PATCH V2 05/17] drm/amd/pm: do not expose those APIs used internally only in si_dpm.c Evan Quan
2021-11-30 12:22   ` Lazar, Lijo
2021-12-01  2:07     ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 06/17] drm/amd/pm: do not expose the API used internally only in kv_dpm.c Evan Quan
2021-11-30 12:27   ` Lazar, Lijo
2021-12-01  2:47     ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 07/17] drm/amd/pm: create a new holder for those APIs used only by legacy ASICs(si/kv) Evan Quan
2021-11-30 13:21   ` Lazar, Lijo
2021-12-01  3:13     ` Quan, Evan
2021-12-01  4:19       ` Lazar, Lijo
2021-12-01  7:17         ` Quan, Evan
2021-12-01  7:36           ` Lazar, Lijo
2021-12-02  1:24             ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 08/17] drm/amd/pm: move pp_force_state_enabled member to amdgpu_pm structure Evan Quan
2021-11-30  7:42 ` [PATCH V2 09/17] drm/amd/pm: optimize the amdgpu_pm_compute_clocks() implementations Evan Quan
2021-11-30  7:42 ` [PATCH V2 10/17] drm/amd/pm: move those code piece used by Stoney only to smu8_hwmgr.c Evan Quan
2021-11-30  7:42 ` [PATCH V2 11/17] drm/amd/pm: correct the usage for amdgpu_dpm_dispatch_task() Evan Quan
2021-11-30 13:48   ` Lazar, Lijo
2021-12-01  3:50     ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 12/17] drm/amd/pm: drop redundant or unused APIs and data structures Evan Quan
2021-11-30  7:42 ` [PATCH V2 13/17] drm/amd/pm: do not expose the smu_context structure used internally in power Evan Quan
2021-11-30 13:57   ` Lazar, Lijo
2021-12-01  5:39     ` Quan, Evan
2021-12-01  6:38       ` Lazar, Lijo
2021-12-01  7:24         ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 14/17] drm/amd/pm: relocate the power related headers Evan Quan
2021-11-30 14:07   ` Lazar, Lijo
2021-12-01  6:22     ` Quan, Evan
2021-11-30  7:42 ` [PATCH V2 15/17] drm/amd/pm: drop unnecessary gfxoff controls Evan Quan
2021-11-30  7:42 ` [PATCH V2 16/17] drm/amd/pm: revise the performance level setting APIs Evan Quan
2021-11-30  7:42 ` [PATCH V2 17/17] drm/amd/pm: unified lock protections in amdgpu_dpm.c Evan Quan
2021-11-30  9:58 ` [PATCH V2 00/17] Unified entry point for other blocks to interact with power Christian König
