* [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11.
@ 2017-01-09  6:51 Rex Zhu
       [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Rex Zhu @ 2017-01-09  6:51 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Rex Zhu

Following the firmware's request, configure the DIDT blocks with only SQ enabled on Polaris11.

Change-Id: I0098144cae57727999101152e973338ddffec28e
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
index 2f6225e..2ea9c0e 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
@@ -767,17 +767,10 @@ int phm_get_voltage_evv_on_sclk(struct pp_hwmgr *hwmgr, uint8_t voltage_type,
 
 int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)
 {
-	/* power tune caps Assume disabled */
+
 	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
 						PHM_PlatformCaps_SQRamping);
 	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
-						PHM_PlatformCaps_DBRamping);
-	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
-						PHM_PlatformCaps_TDRamping);
-	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
-						PHM_PlatformCaps_TCPRamping);
-
-	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
 						PHM_PlatformCaps_RegulatorHot);
 
 	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
@@ -786,9 +779,19 @@ int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)
 	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
 				PHM_PlatformCaps_TablelessHardwareInterface);
 
-	if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12))
+
+	if (hwmgr->chip_id != CHIP_POLARIS10)
 		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
 					PHM_PlatformCaps_SPLLShutdownSupport);
+
+	if (hwmgr->chip_id != CHIP_POLARIS11) {
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+							PHM_PlatformCaps_DBRamping);
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+							PHM_PlatformCaps_TDRamping);
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+							PHM_PlatformCaps_TCPRamping);
+	}
 	return 0;
 }
 
-- 
1.9.1
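The gating above boils down to a handful of bitmap updates. A minimal standalone sketch of the resulting cap state (a simplified model: the real phm_cap_set() sets a bit in a uint32_t array inside platform_descriptor, and the enum values here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative cap numbering; the kernel's PHM_PlatformCaps_* values differ. */
enum { CAP_SQRamping, CAP_DBRamping, CAP_TDRamping, CAP_TCPRamping };
enum chip { CHIP_POLARIS10, CHIP_POLARIS11, CHIP_POLARIS12 };

/* Stand-in for phm_cap_set(): set one bit in the caps bitmap. */
static void cap_set(uint32_t *caps, int cap)
{
	*caps |= 1u << cap;
}

/* Mirrors the patch: SQ ramping is always advertised, while the
 * DB/TD/TCP ramping caps are skipped on Polaris11, which only
 * enables the SQ DIDT block. */
static uint32_t set_special_caps(enum chip id)
{
	uint32_t caps = 0;

	cap_set(&caps, CAP_SQRamping);
	if (id != CHIP_POLARIS11) {
		cap_set(&caps, CAP_DBRamping);
		cap_set(&caps, CAP_TDRamping);
		cap_set(&caps, CAP_TCPRamping);
	}
	return caps;
}
```

With this model, Polaris11 ends up with only the SQ bit set while Polaris10/12 get all four ramping caps.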

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 2/4] drm/amd/powerplay: add new smu message.
       [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
@ 2017-01-09  6:51   ` Rex Zhu
       [not found]     ` <1483944700-3842-2-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
  2017-01-09  6:51   ` [PATCH 3/4] drm/amd/powerplay: refine DIDT feature in Powerplay Rex Zhu
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Rex Zhu @ 2017-01-09  6:51 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Rex Zhu

Change-Id: I17f02555fbc79a9e5a2e9d3160fddbc6c502eaf4
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h b/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
index bce0009..fbc504c 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
@@ -394,6 +394,9 @@ typedef uint16_t PPSMC_Result;
 
 #define PPSMC_MSG_SetVBITimeout               ((uint16_t) 0x306)
 
+#define PPSMC_MSG_EnableDpmDidt               ((uint16_t) 0x309)
+#define PPSMC_MSG_DisableDpmDidt              ((uint16_t) 0x30A)
+
 #define PPSMC_MSG_SecureSRBMWrite             ((uint16_t) 0x600)
 #define PPSMC_MSG_SecureSRBMRead              ((uint16_t) 0x601)
 #define PPSMC_MSG_SetAddress                  ((uint16_t) 0x800)
-- 
1.9.1
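These two message IDs are consumed later in the series to toggle the DIDT DPM feature. A minimal standalone sketch of that toggle (the send_msg_to_smc() stub here is hypothetical and just records the message ID instead of writing the SMC mailbox, which the real smum_send_msg_to_smc() path does):

```c
#include <assert.h>
#include <stdint.h>

#define PPSMC_MSG_EnableDpmDidt               ((uint16_t) 0x309)
#define PPSMC_MSG_DisableDpmDidt              ((uint16_t) 0x30A)

/* Hypothetical stub: record the last message instead of driving hardware. */
static uint16_t last_msg;

static int send_msg_to_smc(uint16_t msg)
{
	last_msg = msg;
	return 0;
}

/* Enable or disable the DIDT DPM feature with the new message pair. */
static int set_dpm_didt(int enable)
{
	return send_msg_to_smc(enable ? PPSMC_MSG_EnableDpmDidt
				      : PPSMC_MSG_DisableDpmDidt);
}
```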



* [PATCH 3/4] drm/amd/powerplay: refine DIDT feature in Powerplay.
       [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
  2017-01-09  6:51   ` [PATCH 2/4] drm/amd/powerplay: add new smu message Rex Zhu
@ 2017-01-09  6:51   ` Rex Zhu
  2017-01-09  6:51   ` [PATCH 4/4] drm/amdgpu: extend profiling mode Rex Zhu
  2017-01-09 14:11   ` [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11 Deucher, Alexander
  3 siblings, 0 replies; 17+ messages in thread
From: Rex Zhu @ 2017-01-09  6:51 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Rex Zhu

Update the SQ DIDT settings and block mask
so that SQ uses PCC on Polaris11.

Change-Id: I80c48de1eab812ff4e201505331c95ddb33b2ad5
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c   |   4 +
 .../gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c   | 224 ++++++++++++++++-----
 2 files changed, 179 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index be6d374..acd038cc 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -1294,6 +1294,10 @@ int smu7_disable_dpm_tasks(struct pp_hwmgr *hwmgr)
 	PP_ASSERT_WITH_CODE((tmp_result == 0),
 			"Failed to disable SMC CAC!", result = tmp_result);
 
+	tmp_result = smu7_disable_didt_config(hwmgr);
+	PP_ASSERT_WITH_CODE((tmp_result == 0),
+			"Failed to disable DIDT!", result = tmp_result);
+
 	PHM_WRITE_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC,
 			CG_SPLL_SPREAD_SPECTRUM, SSEN, 0);
 	PHM_WRITE_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC,
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
index 77d469f..3341c0f 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
@@ -31,6 +31,8 @@
 
 static uint32_t DIDTBlock_Info = SQ_IR_MASK | TCP_IR_MASK | TD_PCC_MASK;
 
+static uint32_t Polaris11_DIDTBlock_Info = SQ_PCC_MASK | TCP_IR_MASK | TD_PCC_MASK;
+
 static const struct gpu_pt_config_reg GCCACConfig_Polaris10[] = {
 /* ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  *      Offset                             Mask                                                Shift                                               Value       Type
@@ -261,9 +263,9 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris11[] = {
 	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__UNUSED_0_MASK,                    DIDT_SQ_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_SQ_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0xffff,     GPU_CONFIGREG_DIDT_IND },
 
-	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_SQ_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3853,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_SQ_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3fff,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_0_MASK,                       DIDT_SQ_CTRL2__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x005a,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x000f,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_1_MASK,                       DIDT_SQ_CTRL2__UNUSED_1__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,       DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_2_MASK,                       DIDT_SQ_CTRL2__UNUSED_2__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
@@ -271,12 +273,12 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris11[] = {
 	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,    DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT,  0x0001,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x0ebb,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x01aa,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__UNUSED_0_MASK,                  DIDT_SQ_STALL_CTRL__UNUSED_0__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
 
-	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x3853,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x3153,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x0dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x0dde,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__UNUSED_0_MASK,                 DIDT_SQ_TUNING_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
 
 	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
@@ -284,8 +286,8 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris11[] = {
 	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__PHASE_OFFSET_MASK,                   DIDT_SQ_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_SQ_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
-	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0008,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0008,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__UNUSED_0_MASK,                       DIDT_SQ_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
 
 	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT0_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT0__SHIFT,                  0x000a,     GPU_CONFIGREG_DIDT_IND },
@@ -373,55 +375,160 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris11[] = {
 	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
 	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__UNUSED_0_MASK,                       DIDT_TCP_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   0xFFFFFFFF  }
+};
+
+static const struct gpu_pt_config_reg DIDTConfig_Polaris12[] = {
+/* ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ *      Offset                             Mask                                                Shift                                               Value       Type
+ * ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ */
+	{   ixDIDT_SQ_WEIGHT0_3,               DIDT_SQ_WEIGHT0_3__WEIGHT0_MASK,                    DIDT_SQ_WEIGHT0_3__WEIGHT0__SHIFT,                  0x0073,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT0_3,               DIDT_SQ_WEIGHT0_3__WEIGHT1_MASK,                    DIDT_SQ_WEIGHT0_3__WEIGHT1__SHIFT,                  0x00ab,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT0_3,               DIDT_SQ_WEIGHT0_3__WEIGHT2_MASK,                    DIDT_SQ_WEIGHT0_3__WEIGHT2__SHIFT,                  0x0084,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT0_3,               DIDT_SQ_WEIGHT0_3__WEIGHT3_MASK,                    DIDT_SQ_WEIGHT0_3__WEIGHT3__SHIFT,                  0x005a,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_SQ_WEIGHT4_7,               DIDT_SQ_WEIGHT4_7__WEIGHT4_MASK,                    DIDT_SQ_WEIGHT4_7__WEIGHT4__SHIFT,                  0x0067,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT4_7,               DIDT_SQ_WEIGHT4_7__WEIGHT5_MASK,                    DIDT_SQ_WEIGHT4_7__WEIGHT5__SHIFT,                  0x0084,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT4_7,               DIDT_SQ_WEIGHT4_7__WEIGHT6_MASK,                    DIDT_SQ_WEIGHT4_7__WEIGHT6__SHIFT,                  0x0027,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT4_7,               DIDT_SQ_WEIGHT4_7__WEIGHT7_MASK,                    DIDT_SQ_WEIGHT4_7__WEIGHT7__SHIFT,                  0x0046,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_SQ_WEIGHT8_11,              DIDT_SQ_WEIGHT8_11__WEIGHT8_MASK,                   DIDT_SQ_WEIGHT8_11__WEIGHT8__SHIFT,                 0x00aa,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT8_11,              DIDT_SQ_WEIGHT8_11__WEIGHT9_MASK,                   DIDT_SQ_WEIGHT8_11__WEIGHT9__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT8_11,              DIDT_SQ_WEIGHT8_11__WEIGHT10_MASK,                  DIDT_SQ_WEIGHT8_11__WEIGHT10__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_WEIGHT8_11,              DIDT_SQ_WEIGHT8_11__WEIGHT11_MASK,                  DIDT_SQ_WEIGHT8_11__WEIGHT11__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_SQ_CTRL1,                   DIDT_SQ_CTRL1__MIN_POWER_MASK,                      DIDT_SQ_CTRL1__MIN_POWER__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_SQ_CTRL1,                   DIDT_SQ_CTRL1__MAX_POWER_MASK,                      DIDT_SQ_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__UNUSED_0_MASK,                    DIDT_SQ_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_TD_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0x00ff,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_TD_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3fff,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__UNUSED_0_MASK,                       DIDT_TD_CTRL2__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_TD_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x000f,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__UNUSED_1_MASK,                       DIDT_TD_CTRL2__UNUSED_1__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,       DIDT_TD_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__UNUSED_2_MASK,                       DIDT_TD_CTRL2__UNUSED_2__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TD_STALL_CTRL,              DIDT_TD_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,    DIDT_TD_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT,  0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_STALL_CTRL,              DIDT_TD_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,       DIDT_TD_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_STALL_CTRL,              DIDT_TD_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,       DIDT_TD_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_STALL_CTRL,              DIDT_TD_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_TD_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x01aa,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_STALL_CTRL,              DIDT_TD_STALL_CTRL__UNUSED_0_MASK,                  DIDT_TD_STALL_CTRL__UNUSED_0__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TD_TUNING_CTRL,             DIDT_TD_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_TD_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_TUNING_CTRL,             DIDT_TD_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_TD_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x0dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_TUNING_CTRL,             DIDT_TD_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_TD_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x0dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_TUNING_CTRL,             DIDT_TD_TUNING_CTRL__UNUSED_0_MASK,                 DIDT_TD_TUNING_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_TD_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__USE_REF_CLOCK_MASK,                  DIDT_TD_CTRL0__USE_REF_CLOCK__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__PHASE_OFFSET_MASK,                   DIDT_TD_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_TD_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_TD_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_TD_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0008,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_TD_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0008,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TD_CTRL0,                   DIDT_TD_CTRL0__UNUSED_0_MASK,                       DIDT_TD_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_WEIGHT0_3,              DIDT_TCP_WEIGHT0_3__WEIGHT0_MASK,                   DIDT_TCP_WEIGHT0_3__WEIGHT0__SHIFT,                 0x0004,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT0_3,              DIDT_TCP_WEIGHT0_3__WEIGHT1_MASK,                   DIDT_TCP_WEIGHT0_3__WEIGHT1__SHIFT,                 0x0037,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT0_3,              DIDT_TCP_WEIGHT0_3__WEIGHT2_MASK,                   DIDT_TCP_WEIGHT0_3__WEIGHT2__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT0_3,              DIDT_TCP_WEIGHT0_3__WEIGHT3_MASK,                   DIDT_TCP_WEIGHT0_3__WEIGHT3__SHIFT,                 0x00ff,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_WEIGHT4_7,              DIDT_TCP_WEIGHT4_7__WEIGHT4_MASK,                   DIDT_TCP_WEIGHT4_7__WEIGHT4__SHIFT,                 0x0054,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT4_7,              DIDT_TCP_WEIGHT4_7__WEIGHT5_MASK,                   DIDT_TCP_WEIGHT4_7__WEIGHT5__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT4_7,              DIDT_TCP_WEIGHT4_7__WEIGHT6_MASK,                   DIDT_TCP_WEIGHT4_7__WEIGHT6__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_WEIGHT4_7,              DIDT_TCP_WEIGHT4_7__WEIGHT7_MASK,                   DIDT_TCP_WEIGHT4_7__WEIGHT7__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_CTRL1,                  DIDT_TCP_CTRL1__MIN_POWER_MASK,                     DIDT_TCP_CTRL1__MIN_POWER__SHIFT,                   0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL1,                  DIDT_TCP_CTRL1__MAX_POWER_MASK,                     DIDT_TCP_CTRL1__MAX_POWER__SHIFT,                   0xffff,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_CTRL_OCP,               DIDT_TCP_CTRL_OCP__UNUSED_0_MASK,                   DIDT_TCP_CTRL_OCP__UNUSED_0__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL_OCP,               DIDT_TCP_CTRL_OCP__OCP_MAX_POWER_MASK,              DIDT_TCP_CTRL_OCP__OCP_MAX_POWER__SHIFT,            0xffff,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__MAX_POWER_DELTA_MASK,               DIDT_TCP_CTRL2__MAX_POWER_DELTA__SHIFT,             0x3dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__UNUSED_0_MASK,                      DIDT_TCP_CTRL2__UNUSED_0__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,      DIDT_TCP_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,    0x0032,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__UNUSED_1_MASK,                      DIDT_TCP_CTRL2__UNUSED_1__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,      DIDT_TCP_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,    0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL2,                  DIDT_TCP_CTRL2__UNUSED_2_MASK,                      DIDT_TCP_CTRL2__UNUSED_2__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_STALL_CTRL,             DIDT_TCP_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,   DIDT_TCP_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT, 0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_STALL_CTRL,             DIDT_TCP_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,      DIDT_TCP_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,    0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_STALL_CTRL,             DIDT_TCP_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,      DIDT_TCP_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,    0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_STALL_CTRL,             DIDT_TCP_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,  DIDT_TCP_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x01aa,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_STALL_CTRL,             DIDT_TCP_STALL_CTRL__UNUSED_0_MASK,                 DIDT_TCP_STALL_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_TUNING_CTRL,            DIDT_TCP_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,      DIDT_TCP_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,    0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_TUNING_CTRL,            DIDT_TCP_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,      DIDT_TCP_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,    0x3dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_TUNING_CTRL,            DIDT_TCP_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,      DIDT_TCP_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,    0x3dde,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_TUNING_CTRL,            DIDT_TCP_TUNING_CTRL__UNUSED_0_MASK,                DIDT_TCP_TUNING_CTRL__UNUSED_0__SHIFT,              0x0000,     GPU_CONFIGREG_DIDT_IND },
+
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_TCP_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__USE_REF_CLOCK_MASK,                  DIDT_TCP_CTRL0__USE_REF_CLOCK__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__PHASE_OFFSET_MASK,                   DIDT_TCP_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_TCP_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_TCP_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_TCP_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
+	{   ixDIDT_TCP_CTRL0,                   DIDT_TCP_CTRL0__UNUSED_0_MASK,                       DIDT_TCP_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
 	{   0xFFFFFFFF  }
 };
 
 
 static int smu7_enable_didt(struct pp_hwmgr *hwmgr, const bool enable)
 {
-
 	uint32_t en = enable ? 1 : 0;
+	uint32_t block_en = 0;
 	int32_t result = 0;
+	uint32_t didt_block;
 	uint32_t data;
 
-	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_SQRamping)) {
-		data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_SQ_CTRL0);
-		data &= ~DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK;
-		data |= ((en << DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK);
-		cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_SQ_CTRL0, data);
-		DIDTBlock_Info &= ~SQ_Enable_MASK;
-		DIDTBlock_Info |= en << SQ_Enable_SHIFT;
-	}
+	if (hwmgr->chip_id == CHIP_POLARIS11)
+		didt_block = Polaris11_DIDTBlock_Info;
+	else
+		didt_block = DIDTBlock_Info;
 
-	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DBRamping)) {
-		data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_DB_CTRL0);
-		data &= ~DIDT_DB_CTRL0__DIDT_CTRL_EN_MASK;
-		data |= ((en << DIDT_DB_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_DB_CTRL0__DIDT_CTRL_EN_MASK);
-		cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_DB_CTRL0, data);
-		DIDTBlock_Info &= ~DB_Enable_MASK;
-		DIDTBlock_Info |= en << DB_Enable_SHIFT;
-	}
+	block_en = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_SQRamping) ? en : 0;
 
-	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TDRamping)) {
-		data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TD_CTRL0);
-		data &= ~DIDT_TD_CTRL0__DIDT_CTRL_EN_MASK;
-		data |= ((en << DIDT_TD_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_TD_CTRL0__DIDT_CTRL_EN_MASK);
-		cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TD_CTRL0, data);
-		DIDTBlock_Info &= ~TD_Enable_MASK;
-		DIDTBlock_Info |= en << TD_Enable_SHIFT;
-	}
+	data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_SQ_CTRL0);
+	data &= ~DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK;
+	data |= ((block_en << DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK);
+	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_SQ_CTRL0, data);
+	didt_block &= ~SQ_Enable_MASK;
+	didt_block |= block_en << SQ_Enable_SHIFT;
+
+	block_en = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DBRamping) ? en : 0;
+
+	data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_DB_CTRL0);
+	data &= ~DIDT_DB_CTRL0__DIDT_CTRL_EN_MASK;
+	data |= ((block_en << DIDT_DB_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_DB_CTRL0__DIDT_CTRL_EN_MASK);
+	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_DB_CTRL0, data);
+	didt_block &= ~DB_Enable_MASK;
+	didt_block |= block_en << DB_Enable_SHIFT;
+
+	block_en = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TDRamping) ? en : 0;
+	data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TD_CTRL0);
+	data &= ~DIDT_TD_CTRL0__DIDT_CTRL_EN_MASK;
+	data |= ((block_en << DIDT_TD_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_TD_CTRL0__DIDT_CTRL_EN_MASK);
+	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TD_CTRL0, data);
+	didt_block &= ~TD_Enable_MASK;
+	didt_block |= block_en << TD_Enable_SHIFT;
+
+	block_en = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TCPRamping) ? en : 0;
+
+	data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TCP_CTRL0);
+	data &= ~DIDT_TCP_CTRL0__DIDT_CTRL_EN_MASK;
+	data |= ((block_en << DIDT_TCP_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_TCP_CTRL0__DIDT_CTRL_EN_MASK);
+	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TCP_CTRL0, data);
+	didt_block &= ~TCP_Enable_MASK;
+	didt_block |= block_en << TCP_Enable_SHIFT;
 
-	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TCPRamping)) {
-		data = cgs_read_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TCP_CTRL0);
-		data &= ~DIDT_TCP_CTRL0__DIDT_CTRL_EN_MASK;
-		data |= ((en << DIDT_TCP_CTRL0__DIDT_CTRL_EN__SHIFT) & DIDT_TCP_CTRL0__DIDT_CTRL_EN_MASK);
-		cgs_write_ind_register(hwmgr->device, CGS_IND_REG__DIDT, ixDIDT_TCP_CTRL0, data);
-		DIDTBlock_Info &= ~TCP_Enable_MASK;
-		DIDTBlock_Info |= en << TCP_Enable_SHIFT;
-	}
 
 	if (enable)
-		result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, PPSMC_MSG_Didt_Block_Function, DIDTBlock_Info);
+		result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, PPSMC_MSG_Didt_Block_Function, didt_block);
 
 	return result;
 }
@@ -498,7 +605,6 @@ int smu7_enable_didt_config(struct pp_hwmgr *hwmgr)
 	sys_info.info_id = CGS_SYSTEM_INFO_GFX_SE_INFO;
 	result = cgs_query_system_info(hwmgr->device, &sys_info);
 
-
 	if (result == 0)
 		num_se = sys_info.value;
 
@@ -507,7 +613,7 @@ int smu7_enable_didt_config(struct pp_hwmgr *hwmgr)
 		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TDRamping) ||
 		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TCPRamping)) {
 
-		/* TO DO Pre DIDT disable clock gating */
+		cgs_enter_safe_mode(hwmgr->device, true);
 		value = 0;
 		value2 = cgs_read_register(hwmgr->device, mmGRBM_GFX_INDEX);
 		for (count = 0; count < num_se; count++) {
@@ -521,11 +627,16 @@ int smu7_enable_didt_config(struct pp_hwmgr *hwmgr)
 				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
 				result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris10);
 				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
-			} else if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id == CHIP_POLARIS12)) {
+			} else if (hwmgr->chip_id == CHIP_POLARIS11) {
 				result = smu7_program_pt_config_registers(hwmgr, GCCACConfig_Polaris11);
 				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
 				result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris11);
 				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
+			} else if (hwmgr->chip_id == CHIP_POLARIS12) {
+				result = smu7_program_pt_config_registers(hwmgr, GCCACConfig_Polaris11);
+				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
+				result = smu7_program_pt_config_registers(hwmgr, DIDTConfig_Polaris12);
+				PP_ASSERT_WITH_CODE((result == 0), "DIDT Config failed.", return result);
 			}
 		}
 		cgs_write_register(hwmgr->device, mmGRBM_GFX_INDEX, value2);
@@ -533,7 +644,13 @@ int smu7_enable_didt_config(struct pp_hwmgr *hwmgr)
 		result = smu7_enable_didt(hwmgr, true);
 		PP_ASSERT_WITH_CODE((result == 0), "EnableDiDt failed.", return result);
 
-		/* TO DO Post DIDT enable clock gating */
+		if (hwmgr->chip_id == CHIP_POLARIS11) {
+			result = smum_send_msg_to_smc(hwmgr->smumgr,
+						(uint16_t)(PPSMC_MSG_EnableDpmDidt));
+			PP_ASSERT_WITH_CODE((0 == result),
+					"Failed to enable DPM DIDT.", return result);
+		}
+		cgs_enter_safe_mode(hwmgr->device, false);
 	}
 
 	return 0;
@@ -547,11 +664,20 @@ int smu7_disable_didt_config(struct pp_hwmgr *hwmgr)
 		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DBRamping) ||
 		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TDRamping) ||
 		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_TCPRamping)) {
-		/* TO DO Pre DIDT disable clock gating */
+
+		cgs_enter_safe_mode(hwmgr->device, true);
 
 		result = smu7_enable_didt(hwmgr, false);
-		PP_ASSERT_WITH_CODE((result == 0), "Post DIDT enable clock gating failed.", return result);
-		/* TO DO Post DIDT enable clock gating */
+		PP_ASSERT_WITH_CODE((result == 0),
+				"Post DIDT enable clock gating failed.",
+				return result);
+		if (hwmgr->chip_id == CHIP_POLARIS11) {
+			result = smum_send_msg_to_smc(hwmgr->smumgr,
+						(uint16_t)(PPSMC_MSG_DisableDpmDidt));
+			PP_ASSERT_WITH_CODE((0 == result),
+					"Failed to disable DPM DIDT.", return result);
+		}
+		cgs_enter_safe_mode(hwmgr->device, false);
 	}
 
 	return 0;
-- 
1.9.1

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
  2017-01-09  6:51   ` [PATCH 2/4] drm/amd/powerplay: add new smu message Rex Zhu
  2017-01-09  6:51   ` [PATCH 3/4] drm/amd/powerplay: refine DIDT feature in Powerplay Rex Zhu
@ 2017-01-09  6:51   ` Rex Zhu
       [not found]     ` <1483944700-3842-4-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
  2017-01-09 14:11   ` [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11 Deucher, Alexander
  3 siblings, 1 reply; 17+ messages in thread
From: Rex Zhu @ 2017-01-09  6:51 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Rex Zhu

in profiling mode, powerplay will keep the power state
as stable as possible and disable the gfx CG and LBPW features.

profile_standard: as a prerequisite, ensure power and thermal are
sustainable; set the clock ratio as close to the highest clock
ratio as possible.
profile_min_sclk: fix mclk as in profile_standard, set the lowest sclk.
profile_min_mclk: fix sclk as in profile_standard, set the lowest mclk.
profile_peak: set the highest sclk and mclk; power and thermal are not
sustainable.
profile_exit: exit profile mode; re-enable the gfx CG/LBPW features.

Change-Id: I217b8cfb98b6a628a11091776debe039bad997a2
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c           |  38 ++++---
 drivers/gpu/drm/amd/amdgpu/ci_dpm.c              |   5 +-
 drivers/gpu/drm/amd/include/amd_shared.h         |   6 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c   |   3 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 127 ++++++++++++++++++++++-
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h        |   1 +
 6 files changed, 154 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index 8438642..a6a4d36 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -120,12 +120,15 @@ static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,
 
 	level = amdgpu_dpm_get_performance_level(adev);
 	return snprintf(buf, PAGE_SIZE, "%s\n",
-			(level & (AMD_DPM_FORCED_LEVEL_AUTO) ? "auto" :
-			(level & AMD_DPM_FORCED_LEVEL_LOW) ? "low" :
-			(level & AMD_DPM_FORCED_LEVEL_HIGH) ? "high" :
-			(level & AMD_DPM_FORCED_LEVEL_MANUAL) ? "manual" :
-			(level & AMD_DPM_FORCED_LEVEL_PROFILING) ? "profiling" :
-			"unknown"));
+			(level == AMD_DPM_FORCED_LEVEL_AUTO) ? "auto" :
+			(level == AMD_DPM_FORCED_LEVEL_LOW) ? "low" :
+			(level == AMD_DPM_FORCED_LEVEL_HIGH) ? "high" :
+			(level == AMD_DPM_FORCED_LEVEL_MANUAL) ? "manual" :
+			(level == AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD) ? "profile_standard" :
+			(level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) ? "profile_min_sclk" :
+			(level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK) ? "profile_min_mclk" :
+			(level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK) ? "profile_peak" :
+			"unknown");
 }
 
 static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
@@ -154,9 +157,17 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 		level = AMD_DPM_FORCED_LEVEL_AUTO;
 	} else if (strncmp("manual", buf, strlen("manual")) == 0) {
 		level = AMD_DPM_FORCED_LEVEL_MANUAL;
-	} else if (strncmp("profile", buf, strlen("profile")) == 0) {
-		level = AMD_DPM_FORCED_LEVEL_PROFILING;
-	} else {
+	} else if (strncmp("profile_exit", buf, strlen("profile_exit")) == 0) {
+		level = AMD_DPM_FORCED_LEVEL_PROFILE_EXIT;
+	} else if (strncmp("profile_standard", buf, strlen("profile_standard")) == 0) {
+		level = AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD;
+	} else if (strncmp("profile_min_sclk", buf, strlen("profile_min_sclk")) == 0) {
+		level = AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK;
+	} else if (strncmp("profile_min_mclk", buf, strlen("profile_min_mclk")) == 0) {
+		level = AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK;
+	} else if (strncmp("profile_peak", buf, strlen("profile_peak")) == 0) {
+		level = AMD_DPM_FORCED_LEVEL_PROFILE_PEAK;
+	} else {
 		count = -EINVAL;
 		goto fail;
 	}
@@ -164,14 +175,6 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 	if (current_level == level)
 		return 0;
 
-	if (level == AMD_DPM_FORCED_LEVEL_PROFILING)
-		amdgpu_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
-						AMD_CG_STATE_UNGATE);
-	else if (level != AMD_DPM_FORCED_LEVEL_PROFILING &&
-			current_level == AMD_DPM_FORCED_LEVEL_PROFILING)
-		amdgpu_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_GFX,
-						AMD_CG_STATE_GATE);
-
 	if (adev->pp_enabled)
 		amdgpu_dpm_force_performance_level(adev, level);
 	else {
@@ -188,6 +191,7 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 			adev->pm.dpm.forced_level = level;
 		mutex_unlock(&adev->pm.mutex);
 	}
+
 fail:
 	return count;
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
index 9a544ad..ece94ee 100644
--- a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
@@ -6571,8 +6571,9 @@ static int ci_dpm_force_clock_level(struct amdgpu_device *adev,
 {
 	struct ci_power_info *pi = ci_get_pi(adev);
 
-	if (!(adev->pm.dpm.forced_level &
-		(AMD_DPM_FORCED_LEVEL_MANUAL | AMD_DPM_FORCED_LEVEL_PROFILING)))
+	if (adev->pm.dpm.forced_level & (AMD_DPM_FORCED_LEVEL_AUTO |
+				AMD_DPM_FORCED_LEVEL_LOW |
+				AMD_DPM_FORCED_LEVEL_HIGH))
 		return -EINVAL;
 
 	switch (type) {
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 92138a9..6fb5870 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -85,7 +85,11 @@ enum amd_dpm_forced_level {
 	AMD_DPM_FORCED_LEVEL_MANUAL = 0x2,
 	AMD_DPM_FORCED_LEVEL_LOW = 0x4,
 	AMD_DPM_FORCED_LEVEL_HIGH = 0x8,
-	AMD_DPM_FORCED_LEVEL_PROFILING = 0x10,
+	AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD = 0x10,
+	AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK = 0x20,
+	AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK = 0x40,
+	AMD_DPM_FORCED_LEVEL_PROFILE_PEAK = 0x80,
+	AMD_DPM_FORCED_LEVEL_PROFILE_EXIT = 0x100,
 };
 
 enum amd_powergating_state {
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
index 93c1384..0668b0b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
@@ -1642,8 +1642,7 @@ static int cz_get_dal_power_level(struct pp_hwmgr *hwmgr,
 static int cz_force_clock_level(struct pp_hwmgr *hwmgr,
 		enum pp_clock_type type, uint32_t mask)
 {
-	if (!(hwmgr->dpm_level &
-		(AMD_DPM_FORCED_LEVEL_MANUAL | AMD_DPM_FORCED_LEVEL_PROFILING)))
+	if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL)
 		return -EINVAL;
 
 	switch (type) {
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index acd038cc..8ced2f4 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -90,6 +90,8 @@ enum DPM_EVENT_SRC {
 };
 
 static const unsigned long PhwVIslands_Magic = (unsigned long)(PHM_VIslands_Magic);
+static int smu7_force_clock_level(struct pp_hwmgr *hwmgr,
+		enum pp_clock_type type, uint32_t mask);
 
 static struct smu7_power_state *cast_phw_smu7_power_state(
 				  struct pp_hw_power_state *hw_ps)
@@ -2490,36 +2492,152 @@ static int smu7_force_dpm_lowest(struct pp_hwmgr *hwmgr)
 	}
 
 	return 0;
+}
+
+static int smu7_get_profiling_clk(struct pp_hwmgr *hwmgr, enum amd_dpm_forced_level level,
+				uint32_t *sclk_mask, uint32_t *mclk_mask, uint32_t *pcie_mask)
+{
+	uint32_t percentage;
+	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
+	struct smu7_dpm_table *golden_dpm_table = &data->golden_dpm_table;
+	int32_t tmp_mclk;
+	int32_t tmp_sclk;
+	int32_t count;
+
+	if (golden_dpm_table->mclk_table.count < 1)
+		return -EINVAL;
+
+	percentage = 100 * golden_dpm_table->sclk_table.dpm_levels[golden_dpm_table->sclk_table.count - 1].value /
+			golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 1].value;
+
+	if (golden_dpm_table->mclk_table.count == 1) {
+		percentage = 70;
+		tmp_mclk = golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 1].value;
+		*mclk_mask = golden_dpm_table->mclk_table.count - 1;
+	} else {
+		tmp_mclk = golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 2].value;
+		*mclk_mask = golden_dpm_table->mclk_table.count - 2;
+	}
+
+	tmp_sclk = tmp_mclk * percentage / 100;
+
+	if (hwmgr->pp_table_version == PP_TABLE_V0) {
+		for (count = hwmgr->dyn_state.vddc_dependency_on_sclk->count-1;
+			count >= 0; count--) {
+			if (tmp_sclk >= hwmgr->dyn_state.vddc_dependency_on_sclk->entries[count].clk) {
+				tmp_sclk = hwmgr->dyn_state.vddc_dependency_on_sclk->entries[count].clk;
+				*sclk_mask = count;
+				break;
+			}
+		}
+		if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK)
+			*sclk_mask = 0;
+
+		if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
+			*sclk_mask = hwmgr->dyn_state.vddc_dependency_on_sclk->count-1;
+	} else if (hwmgr->pp_table_version == PP_TABLE_V1) {
+		struct phm_ppt_v1_information *table_info =
+				(struct phm_ppt_v1_information *)(hwmgr->pptable);
 
+		for (count = table_info->vdd_dep_on_sclk->count-1; count >= 0; count--) {
+			if (tmp_sclk >= table_info->vdd_dep_on_sclk->entries[count].clk) {
+				tmp_sclk = table_info->vdd_dep_on_sclk->entries[count].clk;
+				*sclk_mask = count;
+				break;
+			}
+		}
+		if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK)
+			*sclk_mask = 0;
+
+		if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
+			*sclk_mask = table_info->vdd_dep_on_sclk->count - 1;
+	}
+
+	if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK)
+		*mclk_mask = 0;
+	else if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
+		*mclk_mask = golden_dpm_table->mclk_table.count - 1;
+
+	*pcie_mask = data->dpm_table.pcie_speed_table.count - 1;
+	return 0;
 }
+
 static int smu7_force_dpm_level(struct pp_hwmgr *hwmgr,
 				enum amd_dpm_forced_level level)
 {
 	int ret = 0;
+	uint32_t sclk_mask = 0;
+	uint32_t mclk_mask = 0;
+	uint32_t pcie_mask = 0;
+	uint32_t profile_mode_mask = AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD |
+					AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK |
+					AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK |
+					AMD_DPM_FORCED_LEVEL_PROFILE_PEAK;
+
+	if (level == hwmgr->dpm_level)
+		return ret;
+
+	if (!(hwmgr->dpm_level & profile_mode_mask)) {
+		/* enter profile mode, save current level, disable gfx cg*/
+		if (level & profile_mode_mask) {
+			hwmgr->saved_dpm_level = hwmgr->dpm_level;
+			cgs_set_clockgating_state(hwmgr->device,
+						AMD_IP_BLOCK_TYPE_GFX,
+						AMD_CG_STATE_UNGATE);
+		}
+	} else {
+		/* exit profile mode, restore level, enable gfx cg*/
+		if (!(level & profile_mode_mask)) {
+			if (level == AMD_DPM_FORCED_LEVEL_PROFILE_EXIT)
+				level = hwmgr->saved_dpm_level;
+			cgs_set_clockgating_state(hwmgr->device,
+					AMD_IP_BLOCK_TYPE_GFX,
+					AMD_CG_STATE_GATE);
+		}
+	}
 
 	switch (level) {
 	case AMD_DPM_FORCED_LEVEL_HIGH:
 		ret = smu7_force_dpm_highest(hwmgr);
 		if (ret)
 			return ret;
+		hwmgr->dpm_level = level;
 		break;
 	case AMD_DPM_FORCED_LEVEL_LOW:
 		ret = smu7_force_dpm_lowest(hwmgr);
 		if (ret)
 			return ret;
+		hwmgr->dpm_level = level;
 		break;
 	case AMD_DPM_FORCED_LEVEL_AUTO:
 		ret = smu7_unforce_dpm_levels(hwmgr);
 		if (ret)
 			return ret;
+		hwmgr->dpm_level = level;
+		break;
+	case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
+	case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
+	case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK:
+	case AMD_DPM_FORCED_LEVEL_PROFILE_PEAK:
+		ret = smu7_get_profiling_clk(hwmgr, level, &sclk_mask, &mclk_mask, &pcie_mask);
+		if (ret)
+			return ret;
+		hwmgr->dpm_level = level;
+		smu7_force_clock_level(hwmgr, PP_SCLK, 1<<sclk_mask);
+		smu7_force_clock_level(hwmgr, PP_MCLK, 1<<mclk_mask);
+		smu7_force_clock_level(hwmgr, PP_PCIE, 1<<pcie_mask);
 		break;
+	case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT:
 	default:
 		break;
 	}
 
-	hwmgr->dpm_level = level;
+	if (level & (AMD_DPM_FORCED_LEVEL_PROFILE_PEAK | AMD_DPM_FORCED_LEVEL_HIGH))
+		smu7_fan_ctrl_set_fan_speed_percent(hwmgr, 100);
+	else
+		smu7_fan_ctrl_reset_fan_speed_to_default(hwmgr);
 
-	return ret;
+	return 0;
 }
 
 static int smu7_get_power_state_size(struct pp_hwmgr *hwmgr)
@@ -4053,8 +4171,9 @@ static int smu7_force_clock_level(struct pp_hwmgr *hwmgr,
 {
 	struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
 
-	if (!(hwmgr->dpm_level &
-		(AMD_DPM_FORCED_LEVEL_MANUAL | AMD_DPM_FORCED_LEVEL_PROFILING)))
+	if (hwmgr->dpm_level & (AMD_DPM_FORCED_LEVEL_AUTO |
+				AMD_DPM_FORCED_LEVEL_LOW |
+				AMD_DPM_FORCED_LEVEL_HIGH))
 		return -EINVAL;
 
 	switch (type) {
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 27217a7..7275a29 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -612,6 +612,7 @@ struct pp_hwmgr {
 	uint32_t num_vce_state_tables;
 
 	enum amd_dpm_forced_level dpm_level;
+	enum amd_dpm_forced_level saved_dpm_level;
 	bool block_hw_access;
 	struct phm_gfx_arbiter gfx_arbiter;
 	struct phm_acp_arbiter acp_arbiter;
-- 
1.9.1


* RE: [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11.
       [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-01-09  6:51   ` [PATCH 4/4] drm/amdgpu: extend profiling mode Rex Zhu
@ 2017-01-09 14:11   ` Deucher, Alexander
       [not found]     ` <BN6PR12MB1652DFA6EFFE887DC37BC71BF7640-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  3 siblings, 1 reply; 17+ messages in thread
From: Deucher, Alexander @ 2017-01-09 14:11 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Zhu, Rex

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Monday, January 09, 2017 1:52 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex
> Subject: [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ
> enabled on Polaris11.
> 
> following firmware's request.
> 
> Change-Id: I0098144cae57727999101152e973338ddffec28e
> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 21 ++++++++++++-----
> ----
>  1 file changed, 12 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> index 2f6225e..2ea9c0e 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> @@ -767,17 +767,10 @@ int phm_get_voltage_evv_on_sclk(struct
> pp_hwmgr *hwmgr, uint8_t voltage_type,
> 
>  int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)
>  {
> -	/* power tune caps Assume disabled */
> +
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_SQRamping);
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_DBRamping);
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_TDRamping);
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_TCPRamping);
> -
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_RegulatorHot);
> 
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> @@ -786,9 +779,19 @@ int polaris_set_asic_special_caps(struct pp_hwmgr
> *hwmgr)
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_TablelessHardwareInterface);
> 
> -	if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id ==
> CHIP_POLARIS12))
> +
> +	if (hwmgr->chip_id != CHIP_POLARIS10)
>  		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_SPLLShutdownSupport);
> +
> +	if (hwmgr->chip_id != CHIP_POLARIS11) {

This will set the caps on Polaris10 as well?  Intended?

> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_DBRamping);
> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_TDRamping);
> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_TCPRamping);
> +	}
>  	return 0;
>  }
> 
> --
> 1.9.1
> 
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

* RE: [PATCH 2/4] drm/amd/powerplay: add new smu message.
       [not found]     ` <1483944700-3842-2-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
@ 2017-01-09 14:18       ` Deucher, Alexander
  0 siblings, 0 replies; 17+ messages in thread
From: Deucher, Alexander @ 2017-01-09 14:18 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Zhu, Rex

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Rex Zhu
> Sent: Monday, January 09, 2017 1:52 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex
> Subject: [PATCH 2/4] drm/amd/powerplay: add new smu message.
> 
> Change-Id: I17f02555fbc79a9e5a2e9d3160fddbc6c502eaf4
> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>

Patches 2-4:
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

> ---
>  drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
> b/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
> index bce0009..fbc504c 100644
> --- a/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
> +++ b/drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
> @@ -394,6 +394,9 @@ typedef uint16_t PPSMC_Result;
> 
>  #define PPSMC_MSG_SetVBITimeout               ((uint16_t) 0x306)
> 
> +#define PPSMC_MSG_EnableDpmDidt               ((uint16_t) 0x309)
> +#define PPSMC_MSG_DisableDpmDidt              ((uint16_t) 0x30A)
> +
>  #define PPSMC_MSG_SecureSRBMWrite             ((uint16_t) 0x600)
>  #define PPSMC_MSG_SecureSRBMRead              ((uint16_t) 0x601)
>  #define PPSMC_MSG_SetAddress                  ((uint16_t) 0x800)
> --
> 1.9.1
> 
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

* RE: [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11.
       [not found]     ` <BN6PR12MB1652DFA6EFFE887DC37BC71BF7640-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-01-10  9:41       ` Zhu, Rex
  0 siblings, 0 replies; 17+ messages in thread
From: Zhu, Rex @ 2017-01-10  9:41 UTC (permalink / raw)
  To: Deucher, Alexander, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Hi Alex,

>>> This will set the caps on Polaris10 as well?  Intended?

Rex: Yes. Currently, the new DIDT messages are only supported on Polaris11.

Best Regards
Rex
 
-----Original Message-----
From: Deucher, Alexander 
Sent: Monday, January 09, 2017 10:12 PM
To: Zhu, Rex; amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: RE: [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11.

> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf 
> Of Rex Zhu
> Sent: Monday, January 09, 2017 1:52 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Zhu, Rex
> Subject: [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only 
> SQ enabled on Polaris11.
> 
> following firmware's request.
> 
> Change-Id: I0098144cae57727999101152e973338ddffec28e
> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
> ---
>  drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 21 ++++++++++++-----
> ----
>  1 file changed, 12 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> index 2f6225e..2ea9c0e 100644
> --- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
> @@ -767,17 +767,10 @@ int phm_get_voltage_evv_on_sclk(struct
> pp_hwmgr *hwmgr, uint8_t voltage_type,
> 
>  int polaris_set_asic_special_caps(struct pp_hwmgr *hwmgr)  {
> -	/* power tune caps Assume disabled */
> +
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_SQRamping);
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_DBRamping);
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_TDRamping);
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> -
> 	PHM_PlatformCaps_TCPRamping);
> -
> -	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_RegulatorHot);
> 
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> @@ -786,9 +779,19 @@ int polaris_set_asic_special_caps(struct pp_hwmgr
> *hwmgr)
>  	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_TablelessHardwareInterface);
> 
> -	if ((hwmgr->chip_id == CHIP_POLARIS11) || (hwmgr->chip_id ==
> CHIP_POLARIS12))
> +
> +	if (hwmgr->chip_id != CHIP_POLARIS10)
>  		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> 
> 	PHM_PlatformCaps_SPLLShutdownSupport);
> +
> +	if (hwmgr->chip_id != CHIP_POLARIS11) {

This will set the caps on Polaris10 as well?  Intended?

> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_DBRamping);
> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_TDRamping);
> +		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
> +
> 	PHM_PlatformCaps_TCPRamping);
> +	}
>  	return 0;
>  }
> 
> --
> 1.9.1
> 
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]     ` <1483944700-3842-4-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
@ 2017-01-11 22:46       ` Andy Furniss
       [not found]         ` <5876B5C8.1010401-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Furniss @ 2017-01-11 22:46 UTC (permalink / raw)
  To: Rex Zhu, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Rex Zhu wrote:
> in profiling mode, powerplay will fix power state
> as stable as possible.and disable gfx cg and LBPW feature.
>
> profile_standard: as a prerequisite, ensure power and thermal
> sustainable, set clocks ratio as close to the highest clock
> ratio as possible.
> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
> profile_peak: set highest sclk and mclk, power and thermal not
> sustainable
> profile_exit: exit profile mode. enable gfx cg/lbpw feature.

Testing R9 285 Tonga on drm-next-4.11-wip

This commit has the effect that doing

echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level

instantly forces fan to (I guess) max, where normally it doesn't
need anything like as fast with the clocks high when doing nothing else.


* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]         ` <5876B5C8.1010401-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2017-01-31 23:19           ` Andy Furniss
       [not found]             ` <58911B91.50008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Furniss @ 2017-01-31 23:19 UTC (permalink / raw)
  To: Rex Zhu, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW

Andy Furniss wrote:
> Rex Zhu wrote:
>> in profiling mode, powerplay will fix power state
>> as stable as possible.and disable gfx cg and LBPW feature.
>>
>> profile_standard: as a prerequisite, ensure power and thermal
>> sustainable, set clocks ratio as close to the highest clock
>> ratio as possible.
>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>> profile_peak: set highest sclk and mclk, power and thermal not
>> sustainable
>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>
> Testing R9 285 Tonga on drm-next-4.11-wip
>
> This commit has the effect that doing
>
> echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
>
> instantly forces fan to (I guess) max, where normally it doesn't
> need anything like as fast with the clocks high when doing nothing else.

Ping - just in case this got missed, still the same on current 
drm-next-4.11-wip



* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]             ` <58911B91.50008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2017-02-01 14:42               ` Alex Deucher
       [not found]                 ` <CADnq5_N27CHyxLZQMypKUsBGO3tfGcbW3+aO-vQM+=7NShghCQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Alex Deucher @ 2017-02-01 14:42 UTC (permalink / raw)
  To: Andy Furniss; +Cc: Rex Zhu, amd-gfx list

On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
> Andy Furniss wrote:
>>
>> Rex Zhu wrote:
>>>
>>> in profiling mode, powerplay will fix power state
>>> as stable as possible.and disable gfx cg and LBPW feature.
>>>
>>> profile_standard: as a prerequisite, ensure power and thermal
>>> sustainable, set clocks ratio as close to the highest clock
>>> ratio as possible.
>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>> profile_peak: set highest sclk and mclk, power and thermal not
>>> sustainable
>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>
>>
>> Testing R9 285 Tonga on drm-next-4.11-wip
>>
>> This commit has the effect that doing
>>
>> echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
>>
>> instantly forces fan to (I guess) max, where normally it doesn't
>> need anything like as fast with the clocks high when doing nothing else.
>
>
> Ping - just in case this got missed, still the same on current
> drm-next-4.11-wip

Just a heads up, Rex was looking at this, but it's Chinese New Year this week.

Alex

* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                 ` <CADnq5_N27CHyxLZQMypKUsBGO3tfGcbW3+aO-vQM+=7NShghCQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-02-01 15:36                   ` Andy Furniss
       [not found]                     ` <5892008F.8030008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Furniss @ 2017-02-01 15:36 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Rex Zhu, amd-gfx list

Alex Deucher wrote:
> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
>> Andy Furniss wrote:
>>>
>>> Rex Zhu wrote:
>>>>
>>>> in profiling mode, powerplay will fix power state
>>>> as stable as possible.and disable gfx cg and LBPW feature.
>>>>
>>>> profile_standard: as a prerequisite, ensure power and thermal
>>>> sustainable, set clocks ratio as close to the highest clock
>>>> ratio as possible.
>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>> profile_peak: set highest sclk and mclk, power and thermal not
>>>> sustainable
>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>
>>>
>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>
>>> This commit has the effect that doing
>>>
>>> echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>
>>> instantly forces fan to (I guess) max, where normally it doesn't
>>> need anything like as fast with the clocks high when doing nothing else.
>>
>>
>> Ping - just in case this got missed, still the same on current
>> drm-next-4.11-wip
>
> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.

OK, thanks & Happy new year.



* RE: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                     ` <5892008F.8030008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2017-02-06 11:56                       ` Zhu, Rex
       [not found]                         ` <CY4PR12MB1687B1342B6DE8004D0B08E7FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Zhu, Rex @ 2017-02-06 11:56 UTC (permalink / raw)
  To: Andy Furniss, Alex Deucher; +Cc: amd-gfx list

Sorry for the late response.

Yes, this patch sets the fan speed to max when the user selects the high performance level.

Considering that: 1. setting the fan speed to max helps the GPU run at its highest clock for as long as possible;
	           2. it avoids a rapid GPU temperature rise in some cases.


Best Regards
Rex
  


-----Original Message-----
From: Andy Furniss [mailto:adf.lists@gmail.com] 
Sent: Wednesday, February 01, 2017 11:37 PM
To: Alex Deucher
Cc: Zhu, Rex; amd-gfx list
Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.

Alex Deucher wrote:
> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
>> Andy Furniss wrote:
>>>
>>> Rex Zhu wrote:
>>>>
>>>> in profiling mode, powerplay will fix power state as stable as 
>>>> possible.and disable gfx cg and LBPW feature.
>>>>
>>>> profile_standard: as a prerequisite, ensure power and thermal 
>>>> sustainable, set clocks ratio as close to the highest clock ratio 
>>>> as possible.
>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>> profile_peak: set highest sclk and mclk, power and thermal not 
>>>> sustainable
>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>
>>>
>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>
>>> This commit has the effect that doing
>>>
>>> echo high > 
>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>
>>> instantly forces fan to (I guess) max, where normally it doesn't 
>>> need anything like as fast with the clocks high when doing nothing else.
>>
>>
>> Ping - just in case this got missed, still the same on current 
>> drm-next-4.11-wip
>
> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.

OK, thanks & Happy new year.



* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                         ` <CY4PR12MB1687B1342B6DE8004D0B08E7FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-02-06 12:32                           ` Andy Furniss
       [not found]                             ` <58986CC3.40307-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Andy Furniss @ 2017-02-06 12:32 UTC (permalink / raw)
  To: Zhu, Rex, Alex Deucher; +Cc: amd-gfx list

Zhu, Rex wrote:
> Sorry for the late response.
>
> Yes, I set the fan speed to max in this patch when user set high performance.
>
> Considering that: 1. set fan speed to max is helpful to let GPU run under highest clock as long as possible.
> 	           2. avoid GPU rapid temperature rise in some case.

OK, I guess you know if it's needed or not.

It is somewhat annoying noise-wise; maybe one day there will be a way
for users to set the fan back on auto?

Accepting that I don't know how the h/w works, your point 2 would imply
that with perf on auto, something like running a benchmark that pegs
all the clocks high should also instantly force the fan high?

I don't think that would be popular WRT noise.

>
>
> Best Regards
> Rex
>
>
>
> -----Original Message-----
> From: Andy Furniss [mailto:adf.lists@gmail.com]
> Sent: Wednesday, February 01, 2017 11:37 PM
> To: Alex Deucher
> Cc: Zhu, Rex; amd-gfx list
> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Alex Deucher wrote:
>> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
>>> Andy Furniss wrote:
>>>>
>>>> Rex Zhu wrote:
>>>>>
>>>>> in profiling mode, powerplay will fix power state as stable as
>>>>> possible.and disable gfx cg and LBPW feature.
>>>>>
>>>>> profile_standard: as a prerequisite, ensure power and thermal
>>>>> sustainable, set clocks ratio as close to the highest clock ratio
>>>>> as possible.
>>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>>> profile_peak: set highest sclk and mclk, power and thermal not
>>>>> sustainable
>>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>>
>>>>
>>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>>
>>>> This commit has the effect that doing
>>>>
>>>> echo high >
>>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>>
>>>> instantly forces fan to (I guess) max, where normally it doesn't
>>>> need anything like as fast with the clocks high when doing nothing else.
>>>
>>>
>>> Ping - just in case this got missed, still the same on current
>>> drm-next-4.11-wip
>>
>> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.
>
> OK, thanks & Happy new year.
>
>


* RE: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                             ` <58986CC3.40307-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2017-02-06 14:09                               ` Zhu, Rex
       [not found]                                 ` <CY4PR12MB1687CA7CF3127858CA7C2120FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Zhu, Rex @ 2017-02-06 14:09 UTC (permalink / raw)
  To: Andy Furniss, Alex Deucher; +Cc: amd-gfx list

Please see in line.

Best Regards
Rex

-----Original Message-----
From: Andy Furniss [mailto:adf.lists@gmail.com] 
Sent: Monday, February 06, 2017 8:32 PM
To: Zhu, Rex; Alex Deucher
Cc: amd-gfx list
Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.

Zhu, Rex wrote:
> Sorry for the late response.
>
> Yes, I set the fan speed to max in this patch when user set high performance.
>
> Considering that: 1. set fan speed to max is helpful to let GPU run under highest clock as long as possible.
> 	           2. avoid GPU rapid temperature rise in some case.

>>OK, I guess you know if it's needed or not.
>>It is somewhat annoying noise wise, maybe one day there will be a way for users to set fan back on auto?

Rex: Sure, you can echo "2" > /sys/class/drm/card0/device/hwmon/hwmon3 (maybe not 3)/pwm1_enable

>>Accepting I don't know how the h/w works your point 2 would imply that with perf on auto doing something like running a benchmark that pegs all the clocks high should also 

Rex: Based on the thermal readings, the fan speed will be dynamically adjusted.

>>I don't think that would be popular WRT noise.

Rex: Maybe I worried unnecessarily. I will change the code so it does not set the fan speed to max. Thanks.
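For anyone following along: the hwmon index differs from system to system, as noted above. A minimal sh sketch, assuming the sysfs layout described in this thread, that locates the right hwmon node under a device directory and restores automatic fan control (pwm1_enable = 2 on amdgpu):

```shell
# restore_auto_fan DIR: find pwm1_enable under DIR/hwmon/hwmon* and
# write 2 to it (automatic fan control on amdgpu), then report which
# hwmon node was used. The path layout is an assumption from the thread.
restore_auto_fan() {
    for hw in "$1"/hwmon/hwmon*; do
        if [ -w "$hw/pwm1_enable" ]; then
            echo 2 > "$hw/pwm1_enable"
            echo "$hw"
            return 0
        fi
    done
    return 1    # no writable pwm1_enable found
}
```

Run as root against the real device path, e.g. `restore_auto_fan /sys/class/drm/card0/device`.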



>
> Best Regards
> Rex
>
>
>
> -----Original Message-----
> From: Andy Furniss [mailto:adf.lists@gmail.com]
> Sent: Wednesday, February 01, 2017 11:37 PM
> To: Alex Deucher
> Cc: Zhu, Rex; amd-gfx list
> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Alex Deucher wrote:
>> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
>>> Andy Furniss wrote:
>>>>
>>>> Rex Zhu wrote:
>>>>>
>>>>> in profiling mode, powerplay will fix power state as stable as 
>>>>> possible.and disable gfx cg and LBPW feature.
>>>>>
>>>>> profile_standard: as a prerequisite, ensure power and thermal 
>>>>> sustainable, set clocks ratio as close to the highest clock ratio 
>>>>> as possible.
>>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>>> profile_peak: set highest sclk and mclk, power and thermal not 
>>>>> sustainable
>>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>>
>>>>
>>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>>
>>>> This commit has the effect that doing
>>>>
>>>> echo high >
>>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>>
>>>> instantly forces fan to (I guess) max, where normally it doesn't 
>>>> need anything like as fast with the clocks high when doing nothing else.
>>>
>>>
>>> Ping - just in case this got missed, still the same on current 
>>> drm-next-4.11-wip
>>
>> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.
>
> OK, thanks & Happy new year.
>
>


* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                                 ` <CY4PR12MB1687CA7CF3127858CA7C2120FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-02-07  9:48                                   ` Zhu, Rex
       [not found]                                     ` <CY4PR12MB168710F77DE9F3F5E54A102CFB430-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  0 siblings, 1 reply; 17+ messages in thread
From: Zhu, Rex @ 2017-02-07  9:48 UTC (permalink / raw)
  To: Andy Furniss, Alex Deucher; +Cc: amd-gfx list


[-- Attachment #1.1: Type: text/plain, Size: 3576 bytes --]


Updated: don't set the max fan speed in the high performance level, because of the fan noise.

Best Regards
Rex
________________________________
From: amd-gfx <amd-gfx-bounces-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org> on behalf of Zhu, Rex <Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
Sent: Monday, February 6, 2017 10:09 PM
To: Andy Furniss; Alex Deucher
Cc: amd-gfx list
Subject: RE: [PATCH 4/4] drm/amdgpu: extend profiling mode.

Please see in line.

Best Regards
Rex

-----Original Message-----
From: Andy Furniss [mailto:adf.lists-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
Sent: Monday, February 06, 2017 8:32 PM
To: Zhu, Rex; Alex Deucher
Cc: amd-gfx list
Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.

Zhu, Rex wrote:
> Sorry for the late response.
>
> Yes, I set the fan speed to max in this patch when user set high performance.
>
> Considering that: 1. set fan speed to max is helpful to let GPU run under highest clock as long as possible.
>                   2. avoid GPU rapid temperature rise in some case.

>>OK, I guess you know if it's needed or not.
>>It is somewhat annoying noise wise, maybe one day there will be a way for users to set fan back on auto?

Rex: sure, you can echo "2"> /sys/class/drm/card0/device/hwmon/hwmon3(maybe not 3)/pwm1_enable

>>Accepting I don't know how the h/w works your point 2 would imply that with perf on auto doing something like running a benchmark that pegs all the clocks high should also

Rex: based on the thermal,  the fan speed.will be dynamically adjusted.

>>I don't think that would be popular WRT noise.

Rex: maybe I worried unnecessarily. I will change the code not to set fan speed to max. thanks.



>
> Best Regards
> Rex
>
>
>
> -----Original Message-----
> From: Andy Furniss [mailto:adf.lists-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
> Sent: Wednesday, February 01, 2017 11:37 PM
> To: Alex Deucher
> Cc: Zhu, Rex; amd-gfx list
> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Alex Deucher wrote:
>> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>>> Andy Furniss wrote:
>>>>
>>>> Rex Zhu wrote:
>>>>>
>>>>> in profiling mode, powerplay will fix power state as stable as
>>>>> possible.and disable gfx cg and LBPW feature.
>>>>>
>>>>> profile_standard: as a prerequisite, ensure power and thermal
>>>>> sustainable, set clocks ratio as close to the highest clock ratio
>>>>> as possible.
>>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>>> profile_peak: set highest sclk and mclk, power and thermal not
>>>>> sustainable
>>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>>
>>>>
>>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>>
>>>> This commit has the effect that doing
>>>>
>>>> echo high >
>>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>>
>>>> instantly forces fan to (I guess) max, where normally it doesn't
>>>> need anything like as fast with the clocks high when doing nothing else.
>>>
>>>
>>> Ping - just in case this got missed, still the same on current
>>> drm-next-4.11-wip
>>
>> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.
>
> OK, thanks & Happy new year.
>
>


[-- Attachment #1.2: Type: text/html, Size: 5488 bytes --]

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0001-drm-amd-powerplay-set-fan-speed-to-max-in-profile-pe.patch --]
[-- Type: text/x-patch; name="0001-drm-amd-powerplay-set-fan-speed-to-max-in-profile-pe.patch", Size: 1553 bytes --]

From 4e3f4155fac060a4d86d8bff61d461278dd7ac02 Mon Sep 17 00:00:00 2001
From: Rex Zhu <Rex.Zhu@amd.com>
Date: Tue, 7 Feb 2017 17:34:11 +0800
Subject: [PATCH] drm/amd/powerplay: set fan speed to max in profile peak mode
 only.

Change-Id: Ie9e789e6aad0f32841e0dd4229619bc103f702f8
Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index c72ce85..b1de9e8 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -2624,6 +2624,7 @@ static int smu7_force_dpm_level(struct pp_hwmgr *hwmgr,
 		smu7_force_clock_level(hwmgr, PP_SCLK, 1<<sclk_mask);
 		smu7_force_clock_level(hwmgr, PP_MCLK, 1<<mclk_mask);
 		smu7_force_clock_level(hwmgr, PP_PCIE, 1<<pcie_mask);
+
 		break;
 	case AMD_DPM_FORCED_LEVEL_MANUAL:
 		hwmgr->dpm_level = level;
@@ -2633,9 +2634,9 @@ static int smu7_force_dpm_level(struct pp_hwmgr *hwmgr,
 		break;
 	}
 
-	if (level & (AMD_DPM_FORCED_LEVEL_PROFILE_PEAK | AMD_DPM_FORCED_LEVEL_HIGH))
+	if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK && hwmgr->saved_dpm_level != AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
 		smu7_fan_ctrl_set_fan_speed_percent(hwmgr, 100);
-	else
+	else if (level != AMD_DPM_FORCED_LEVEL_PROFILE_PEAK && hwmgr->saved_dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
 		smu7_fan_ctrl_reset_fan_speed_to_default(hwmgr);
 
 	return 0;
-- 
1.9.1
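As a usage sketch for the sysfs interface this patch touches: the helper below writes a forced-performance level and reads back what the driver reports. The level strings come from the commit message quoted earlier in the thread, and the card path is an assumption, so verify both against your kernel before relying on them.

```shell
# set_dpm_level NODE LEVEL: write a level string to the forced-performance
# sysfs node and echo back what the driver reports. NODE is normally
# /sys/class/drm/card0/device/power_dpm_force_performance_level.
set_dpm_level() {
    echo "$2" > "$1" && cat "$1"
}

# Example (run as root; level names assumed from the commit message):
#   node=/sys/class/drm/card0/device/power_dpm_force_performance_level
#   set_dpm_level "$node" profile_peak   # max sclk/mclk, fan handling per patch
#   set_dpm_level "$node" profile_exit   # leave profiling mode again
```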



* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                                     ` <CY4PR12MB168710F77DE9F3F5E54A102CFB430-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-02-07 13:37                                       ` Andy Furniss
  2017-02-07 15:24                                       ` Alex Deucher
  1 sibling, 0 replies; 17+ messages in thread
From: Andy Furniss @ 2017-02-07 13:37 UTC (permalink / raw)
  To: Zhu, Rex, Alex Deucher; +Cc: amd-gfx list

Zhu, Rex wrote:
>
> Not set max fan speed in high performance level because of the fan noise.

Thanks, I tested the patch and all seems OK again.

>
> Best Regards
> Rex
> ________________________________
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Zhu, Rex <Rex.Zhu@amd.com>
> Sent: Monday, February 6, 2017 10:09 PM
> To: Andy Furniss; Alex Deucher
> Cc: amd-gfx list
> Subject: RE: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Please see in line.
>
> Best Regards
> Rex
>
> -----Original Message-----
> From: Andy Furniss [mailto:adf.lists@gmail.com]
> Sent: Monday, February 06, 2017 8:32 PM
> To: Zhu, Rex; Alex Deucher
> Cc: amd-gfx list
> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Zhu, Rex wrote:
>> Sorry for the late response.
>>
>> Yes, I set the fan speed to max in this patch when user set high performance.
>>
>> Considering that: 1. set fan speed to max is helpful to let GPU run under highest clock as long as possible.
>>                    2. avoid GPU rapid temperature rise in some case.
>
>>> OK, I guess you know if it's needed or not.
>>> It is somewhat annoying noise wise, maybe one day there will be a way for users to set fan back on auto?
>
> Rex: sure, you can echo "2"> /sys/class/drm/card0/device/hwmon/hwmon3(maybe not 3)/pwm1_enable
>
>>> Accepting I don't know how the h/w works your point 2 would imply that with perf on auto doing something like running a benchmark that pegs all the clocks high should also
>
> Rex: based on the thermal,  the fan speed.will be dynamically adjusted.
>
>>> I don't think that would be popular WRT noise.
>
> Rex: maybe I worried unnecessarily. I will change the code not to set fan speed to max. thanks.
>
>
>
>>
>> Best Regards
>> Rex
>>
>>
>>
>> -----Original Message-----
>> From: Andy Furniss [mailto:adf.lists@gmail.com]
>> Sent: Wednesday, February 01, 2017 11:37 PM
>> To: Alex Deucher
>> Cc: Zhu, Rex; amd-gfx list
>> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>>
>> Alex Deucher wrote:
>>> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com> wrote:
>>>> Andy Furniss wrote:
>>>>>
>>>>> Rex Zhu wrote:
>>>>>>
>>>>>> in profiling mode, powerplay will fix power state as stable as
>>>>>> possible.and disable gfx cg and LBPW feature.
>>>>>>
>>>>>> profile_standard: as a prerequisite, ensure power and thermal
>>>>>> sustainable, set clocks ratio as close to the highest clock ratio
>>>>>> as possible.
>>>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>>>> profile_peak: set highest sclk and mclk, power and thermal not
>>>>>> sustainable
>>>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>>>
>>>>>
>>>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>>>
>>>>> This commit has the effect that doing
>>>>>
>>>>> echo high >
>>>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>>>
>>>>> instantly forces fan to (I guess) max, where normally it doesn't
>>>>> need anything like as fast with the clocks high when doing nothing else.
>>>>
>>>>
>>>> Ping - just in case this got missed, still the same on current
>>>> drm-next-4.11-wip
>>>
>>> Just a heads up, Rex was looking at this, but it's Chinese New Year this week.
>>
>> OK, thanks & Happy new year.
>>
>>
>


* Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
       [not found]                                     ` <CY4PR12MB168710F77DE9F3F5E54A102CFB430-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  2017-02-07 13:37                                       ` Andy Furniss
@ 2017-02-07 15:24                                       ` Alex Deucher
  1 sibling, 0 replies; 17+ messages in thread
From: Alex Deucher @ 2017-02-07 15:24 UTC (permalink / raw)
  To: Zhu, Rex; +Cc: Andy Furniss, amd-gfx list

On Tue, Feb 7, 2017 at 4:48 AM, Zhu, Rex <Rex.Zhu@amd.com> wrote:
>
> Not set max fan speed in high performance level because of the fan noise.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

>
> Best Regards
> Rex
> ________________________________
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Zhu, Rex
> <Rex.Zhu@amd.com>
> Sent: Monday, February 6, 2017 10:09 PM
> To: Andy Furniss; Alex Deucher
> Cc: amd-gfx list
> Subject: RE: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Please see in line.
>
> Best Regards
> Rex
>
> -----Original Message-----
> From: Andy Furniss [mailto:adf.lists@gmail.com]
> Sent: Monday, February 06, 2017 8:32 PM
> To: Zhu, Rex; Alex Deucher
> Cc: amd-gfx list
> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>
> Zhu, Rex wrote:
>> Sorry for the late response.
>>
>> Yes, I set the fan speed to max in this patch when user set high
>> performance.
>>
>> Considering that: 1. set fan speed to max is helpful to let GPU run under
>> highest clock as long as possible.
>>                   2. avoid GPU rapid temperature rise in some case.
>
>>>OK, I guess you know if it's needed or not.
>>>It is somewhat annoying noise wise, maybe one day there will be a way for
>>> users to set fan back on auto?
>
> Rex: sure, you can echo "2"> /sys/class/drm/card0/device/hwmon/hwmon3(maybe
> not 3)/pwm1_enable
>
>>>Accepting I don't know how the h/w works your point 2 would imply that
>>> with perf on auto doing something like running a benchmark that pegs all the
>>> clocks high should also
>
> Rex: based on the thermal,  the fan speed.will be dynamically adjusted.
>
>>>I don't think that would be popular WRT noise.
>
> Rex: maybe I worried unnecessarily. I will change the code not to set fan
> speed to max. thanks.
>
>
>
>>
>> Best Regards
>> Rex
>>
>>
>>
>> -----Original Message-----
>> From: Andy Furniss [mailto:adf.lists@gmail.com]
>> Sent: Wednesday, February 01, 2017 11:37 PM
>> To: Alex Deucher
>> Cc: Zhu, Rex; amd-gfx list
>> Subject: Re: [PATCH 4/4] drm/amdgpu: extend profiling mode.
>>
>> Alex Deucher wrote:
>>> On Tue, Jan 31, 2017 at 6:19 PM, Andy Furniss <adf.lists@gmail.com>
>>> wrote:
>>>> Andy Furniss wrote:
>>>>>
>>>>> Rex Zhu wrote:
>>>>>>
>>>>>> in profiling mode, powerplay will fix power state as stable as
>>>>>> possible.and disable gfx cg and LBPW feature.
>>>>>>
>>>>>> profile_standard: as a prerequisite, ensure power and thermal
>>>>>> sustainable, set clocks ratio as close to the highest clock ratio
>>>>>> as possible.
>>>>>> profile_min_sclk: fix mclk as profile_normal, set lowest sclk
>>>>>> profile_min_mclk: fix sclk as profile_normal, set lowest mclk
>>>>>> profile_peak: set highest sclk and mclk, power and thermal not
>>>>>> sustainable
>>>>>> profile_exit: exit profile mode. enable gfx cg/lbpw feature.
>>>>>
>>>>>
>>>>> Testing R9 285 Tonga on drm-next-4.11-wip
>>>>>
>>>>> This commit has the effect that doing
>>>>>
>>>>> echo high >
>>>>> /sys/class/drm/card0/device/power_dpm_force_performance_level
>>>>>
>>>>> instantly forces fan to (I guess) max, where normally it doesn't
>>>>> need anything like as fast with the clocks high when doing nothing
>>>>> else.
>>>>
>>>>
>>>> Ping - just in case this got missed, still the same on current
>>>> drm-next-4.11-wip
>>>
>>> Just a heads up, Rex was looking at this, but it's Chinese New Year this
>>> week.
>>
>> OK, thanks & Happy new year.
>>
>>
>

end of thread, other threads:[~2017-02-07 15:24 UTC | newest]

Thread overview: 17+ messages
2017-01-09  6:51 [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11 Rex Zhu
     [not found] ` <1483944700-3842-1-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
2017-01-09  6:51   ` [PATCH 2/4] drm/amd/powerplay: add new smu message Rex Zhu
     [not found]     ` <1483944700-3842-2-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
2017-01-09 14:18       ` Deucher, Alexander
2017-01-09  6:51   ` [PATCH 3/4] drm/amd/powerplay: refine DIDT feature in Powerplay Rex Zhu
2017-01-09  6:51   ` [PATCH 4/4] drm/amdgpu: extend profiling mode Rex Zhu
     [not found]     ` <1483944700-3842-4-git-send-email-Rex.Zhu-5C7GfCeVMHo@public.gmane.org>
2017-01-11 22:46       ` Andy Furniss
     [not found]         ` <5876B5C8.1010401-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2017-01-31 23:19           ` Andy Furniss
     [not found]             ` <58911B91.50008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2017-02-01 14:42               ` Alex Deucher
     [not found]                 ` <CADnq5_N27CHyxLZQMypKUsBGO3tfGcbW3+aO-vQM+=7NShghCQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-02-01 15:36                   ` Andy Furniss
     [not found]                     ` <5892008F.8030008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2017-02-06 11:56                       ` Zhu, Rex
     [not found]                         ` <CY4PR12MB1687B1342B6DE8004D0B08E7FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-02-06 12:32                           ` Andy Furniss
     [not found]                             ` <58986CC3.40307-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2017-02-06 14:09                               ` Zhu, Rex
     [not found]                                 ` <CY4PR12MB1687CA7CF3127858CA7C2120FB400-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-02-07  9:48                                   ` Zhu, Rex
     [not found]                                     ` <CY4PR12MB168710F77DE9F3F5E54A102CFB430-rpdhrqHFk06Y0SjTqZDccQdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-02-07 13:37                                       ` Andy Furniss
2017-02-07 15:24                                       ` Alex Deucher
2017-01-09 14:11   ` [PATCH 1/4] drm/amd/powerplay: Configuring DIDT blocks only SQ enabled on Polaris11 Deucher, Alexander
     [not found]     ` <BN6PR12MB1652DFA6EFFE887DC37BC71BF7640-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-01-10  9:41       ` Zhu, Rex
