* [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10
@ 2022-03-13 5:28 Darren Powell
2022-03-13 5:28 ` [PATCH 1/2] amdgpu/pm: Add new hwmgr API function "emit_clock_levels" Darren Powell
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Darren Powell @ 2022-03-13 5:28 UTC (permalink / raw)
To: amd-gfx; +Cc: Darren Powell
== Description ==
Use of scnprintf within the kernel is not recommended, but a simple sysfs_emit replacement
has not been possible because sysfs_emit requires its buffer argument to be page aligned.
This patch set implements a new API, "emit_clock_levels", to facilitate passing both the
buffer base and the write offset to the device rather than just a write pointer.
The emit_clock_levels API for amdgpu_dpm has been duplicated to pp_dpm, based on
commit 7f36948c92b2 ("amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset"),
and vega10_emit_clock_levels has been implemented with sysfs_emit, based on vega10_print_clock_levels.
== Patch Summary ==
linux: (git@gitlab.freedesktop.org:agd5f) origin/amd-staging-drm-next @ 6b6b9c625004
+ e94021f6c08c amdgpu/pm: Add new hwmgr API function "emit_clock_levels"
+ d83131987718 amdgpu/pm: Implement emit_clk_levels for vega10
== System Summary ==
* DESKTOP(AMD FX-8350 + VEGA10(687f/c3), BIOS: F2)
+ ISO(Ubuntu 20.04.4 LTS)
+ Kernel(5.16.0-20220307-fdoagd5f-g6b6b9c625004)
+ Overdrive Enabled(amdgpu.ppfeaturemask |= 0x4000)
== Test ==
AMDGPU_PCI_ADDR=`lspci -nn | grep "VGA\|Display" | cut -d " " -f 1`
AMDGPU_HWMON=`ls -la /sys/class/hwmon | grep $AMDGPU_PCI_ADDR | awk '{print $9}'`
HWMON_DIR=/sys/class/hwmon/${AMDGPU_HWMON}
# LOGFILE is assumed to be set to a writable log file path
lspci -nn | grep "VGA\|Display" > $LOGFILE
printf 'OD enabled = %X\n' "$(( `cat /sys/module/amdgpu/parameters/ppfeaturemask` & 0x4000 ))" >> $LOGFILE
FILES="pp_od_clk_voltage
pp_dpm_sclk
pp_dpm_mclk
pp_dpm_pcie
pp_dpm_socclk
pp_dpm_fclk
pp_dpm_dcefclk
pp_dpm_vclk
pp_dpm_dclk "
for f in $FILES
do
echo === $f === >> $LOGFILE
cat $HWMON_DIR/device/$f >> $LOGFILE
done
cat $LOGFILE
Darren Powell (2):
amdgpu/pm: Add new hwmgr API function "emit_clock_levels"
amdgpu/pm: Implement emit_clk_levels for vega10
.../gpu/drm/amd/pm/powerplay/amd_powerplay.c | 17 ++
.../drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c | 147 ++++++++++++++++++
drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h | 2 +
3 files changed, 166 insertions(+)
base-commit: 6b6b9c625004e07e000ca1918cadcb64439eb498
--
2.35.1
* [PATCH 1/2] amdgpu/pm: Add new hwmgr API function "emit_clock_levels"
2022-03-13 5:28 [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10 Darren Powell
@ 2022-03-13 5:28 ` Darren Powell
2022-03-13 5:28 ` [PATCH 2/2] amdgpu/pm: Implement emit_clk_levels for vega10 Darren Powell
2022-03-24 20:23 ` [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution " Powell, Darren
2 siblings, 0 replies; 5+ messages in thread
From: Darren Powell @ 2022-03-13 5:28 UTC (permalink / raw)
To: amd-gfx; +Cc: Darren Powell
Extend commit 7f36948c92b2 ("amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset")
Add new hwmgr API function "emit_clock_levels"
- add member emit_clock_levels to pp_hwmgr_func
- implement pp_dpm_emit_clock_levels
- add pp_dpm_emit_clock_levels to pp_dpm_funcs
Signed-off-by: Darren Powell <darren.powell@amd.com>
---
.../gpu/drm/amd/pm/powerplay/amd_powerplay.c | 17 +++++++++++++++++
drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h | 2 ++
2 files changed, 19 insertions(+)
diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
index a2da46bf3985..dbed72c1e0c6 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
@@ -671,6 +671,22 @@ static int pp_dpm_force_clock_level(void *handle,
return hwmgr->hwmgr_func->force_clock_level(hwmgr, type, mask);
}
+static int pp_dpm_emit_clock_levels(void *handle,
+ enum pp_clock_type type,
+ char *buf,
+ int *offset)
+{
+ struct pp_hwmgr *hwmgr = handle;
+
+ if (!hwmgr || !hwmgr->pm_en)
+ return -EOPNOTSUPP;
+
+ if (!hwmgr->hwmgr_func->emit_clock_levels)
+ return -ENOENT;
+
+ return hwmgr->hwmgr_func->emit_clock_levels(hwmgr, type, buf, offset);
+}
+
static int pp_dpm_print_clock_levels(void *handle,
enum pp_clock_type type, char *buf)
{
@@ -1535,6 +1551,7 @@ static const struct amd_pm_funcs pp_dpm_funcs = {
.get_pp_table = pp_dpm_get_pp_table,
.set_pp_table = pp_dpm_set_pp_table,
.force_clock_level = pp_dpm_force_clock_level,
+ .emit_clock_levels = pp_dpm_emit_clock_levels,
.print_clock_levels = pp_dpm_print_clock_levels,
.get_sclk_od = pp_dpm_get_sclk_od,
.set_sclk_od = pp_dpm_set_sclk_od,
diff --git a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
index 4f7f2f455301..27f8d0e0e6a8 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/pm/powerplay/inc/hwmgr.h
@@ -313,6 +313,8 @@ struct pp_hwmgr_func {
int (*get_max_high_clocks)(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks);
int (*power_off_asic)(struct pp_hwmgr *hwmgr);
int (*force_clock_level)(struct pp_hwmgr *hwmgr, enum pp_clock_type type, uint32_t mask);
+ int (*emit_clock_levels)(struct pp_hwmgr *hwmgr,
+ enum pp_clock_type type, char *buf, int *offset);
int (*print_clock_levels)(struct pp_hwmgr *hwmgr, enum pp_clock_type type, char *buf);
int (*powergate_gfx)(struct pp_hwmgr *hwmgr, bool enable);
int (*get_sclk_od)(struct pp_hwmgr *hwmgr);
--
2.35.1
* [PATCH 2/2] amdgpu/pm: Implement emit_clk_levels for vega10
2022-03-13 5:28 [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10 Darren Powell
2022-03-13 5:28 ` [PATCH 1/2] amdgpu/pm: Add new hwmgr API function "emit_clock_levels" Darren Powell
@ 2022-03-13 5:28 ` Darren Powell
2022-03-24 20:23 ` [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution " Powell, Darren
2 siblings, 0 replies; 5+ messages in thread
From: Darren Powell @ 2022-03-13 5:28 UTC (permalink / raw)
To: amd-gfx; +Cc: Darren Powell
(v1)
- implement emit_clk_levels for vega10, based on print_clk_levels,
but using sysfs_emit rather than sprintf
- modify local int vars to use uint32_t to match arg type of
called functions
- add return of error codes
- refactor OD_XXX cases to return early with -EOPNOTSUPP if
!(hwmgr->od_enabled)
Signed-off-by: Darren Powell <darren.powell@amd.com>
---
.../drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c | 147 ++++++++++++++++++
1 file changed, 147 insertions(+)
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
index 3f040be0d158..a25b984fceb5 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
@@ -4625,6 +4625,152 @@ static int vega10_get_current_pcie_link_speed_level(struct pp_hwmgr *hwmgr)
>> PSWUSP0_PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT;
}
+static int vega10_emit_clock_levels(struct pp_hwmgr *hwmgr,
+ enum pp_clock_type type, char *buf, int *offset)
+{
+ struct vega10_hwmgr *data = hwmgr->backend;
+ struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
+ struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
+ struct vega10_single_dpm_table *soc_table = &(data->dpm_table.soc_table);
+ struct vega10_single_dpm_table *dcef_table = &(data->dpm_table.dcef_table);
+ struct vega10_odn_clock_voltage_dependency_table *podn_vdd_dep = NULL;
+ uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width;
+ PPTable_t *pptable = &(data->smc_state_table.pp_table);
+
+ uint32_t i, now, count = 0;
+ int ret = 0;
+
+ switch (type) {
+ case PP_SCLK:
+ if (data->registry_data.sclk_dpm_key_disabled)
+ return -EOPNOTSUPP;
+
+ ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentGfxclkIndex, &now);
+ if (unlikely(ret != 0))
+ return ret;
+
+ if (hwmgr->pp_one_vf &&
+ (hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK))
+ count = 5;
+ else
+ count = sclk_table->count;
+ for (i = 0; i < count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n",
+ i, sclk_table->dpm_levels[i].value / 100,
+ (i == now) ? "*" : "");
+ break;
+ case PP_MCLK:
+ if (data->registry_data.mclk_dpm_key_disabled)
+ return -EOPNOTSUPP;
+
+ ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentUclkIndex, &now);
+ if (unlikely(ret != 0))
+ return ret;
+
+ for (i = 0; i < mclk_table->count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n",
+ i, mclk_table->dpm_levels[i].value / 100,
+ (i == now) ? "*" : "");
+ break;
+ case PP_SOCCLK:
+ if (data->registry_data.socclk_dpm_key_disabled)
+ return -EOPNOTSUPP;
+
+ ret = smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetCurrentSocclkIndex, &now);
+ if (unlikely(ret != 0))
+ return ret;
+
+ for (i = 0; i < soc_table->count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n",
+ i, soc_table->dpm_levels[i].value / 100,
+ (i == now) ? "*" : "");
+ break;
+ case PP_DCEFCLK:
+ if (data->registry_data.dcefclk_dpm_key_disabled)
+ return -EOPNOTSUPP;
+
+ ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_GetClockFreqMHz,
+ CLK_DCEFCLK, &now);
+ if (unlikely(ret != 0))
+ return ret;
+
+ for (i = 0; i < dcef_table->count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n",
+ i, dcef_table->dpm_levels[i].value / 100,
+ (dcef_table->dpm_levels[i].value / 100 == now) ?
+ "*" : "");
+ break;
+ case PP_PCIE:
+ current_gen_speed =
+ vega10_get_current_pcie_link_speed_level(hwmgr);
+ current_lane_width =
+ vega10_get_current_pcie_link_width_level(hwmgr);
+ for (i = 0; i < NUM_LINK_LEVELS; i++) {
+ gen_speed = pptable->PcieGenSpeed[i];
+ lane_width = pptable->PcieLaneCount[i];
+
+ *offset += sysfs_emit_at(buf, *offset, "%d: %s %s %s\n", i,
+ (gen_speed == 0) ? "2.5GT/s," :
+ (gen_speed == 1) ? "5.0GT/s," :
+ (gen_speed == 2) ? "8.0GT/s," :
+ (gen_speed == 3) ? "16.0GT/s," : "",
+ (lane_width == 1) ? "x1" :
+ (lane_width == 2) ? "x2" :
+ (lane_width == 3) ? "x4" :
+ (lane_width == 4) ? "x8" :
+ (lane_width == 5) ? "x12" :
+ (lane_width == 6) ? "x16" : "",
+ (current_gen_speed == gen_speed) &&
+ (current_lane_width == lane_width) ?
+ "*" : "");
+ }
+ break;
+
+ case OD_SCLK:
+ if (!hwmgr->od_enabled)
+ return -EOPNOTSUPP;
+
+ *offset += sysfs_emit_at(buf, *offset, "%s:\n", "OD_SCLK");
+ podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_sclk;
+ for (i = 0; i < podn_vdd_dep->count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %10uMhz %10umV\n",
+ i, podn_vdd_dep->entries[i].clk / 100,
+ podn_vdd_dep->entries[i].vddc);
+ break;
+ case OD_MCLK:
+ if (!hwmgr->od_enabled)
+ return -EOPNOTSUPP;
+
+ *offset += sysfs_emit_at(buf, *offset, "%s:\n", "OD_MCLK");
+ podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_mclk;
+ for (i = 0; i < podn_vdd_dep->count; i++)
+ *offset += sysfs_emit_at(buf, *offset, "%d: %10uMhz %10umV\n",
+ i, podn_vdd_dep->entries[i].clk/100,
+ podn_vdd_dep->entries[i].vddc);
+ break;
+ case OD_RANGE:
+ if (!hwmgr->od_enabled)
+ return -EOPNOTSUPP;
+
+ *offset += sysfs_emit_at(buf, *offset, "%s:\n", "OD_RANGE");
+ *offset += sysfs_emit_at(buf, *offset, "SCLK: %7uMHz %10uMHz\n",
+ data->golden_dpm_table.gfx_table.dpm_levels[0].value/100,
+ hwmgr->platform_descriptor.overdriveLimit.engineClock/100);
+ *offset += sysfs_emit_at(buf, *offset, "MCLK: %7uMHz %10uMHz\n",
+ data->golden_dpm_table.mem_table.dpm_levels[0].value/100,
+ hwmgr->platform_descriptor.overdriveLimit.memoryClock/100);
+ *offset += sysfs_emit_at(buf, *offset, "VDDC: %7umV %11umV\n",
+ data->odn_dpm_table.min_vddc,
+ data->odn_dpm_table.max_vddc);
+ break;
+ default:
+ ret = -ENOENT;
+ break;
+ }
+ return ret;
+}
+
static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
enum pp_clock_type type, char *buf)
{
@@ -5551,6 +5697,7 @@ static const struct pp_hwmgr_func vega10_hwmgr_funcs = {
.set_watermarks_for_clocks_ranges = vega10_set_watermarks_for_clocks_ranges,
.display_clock_voltage_request = vega10_display_clock_voltage_request,
.force_clock_level = vega10_force_clock_level,
+ .emit_clock_levels = vega10_emit_clock_levels,
.print_clock_levels = vega10_print_clock_levels,
.display_config_changed = vega10_display_configuration_changed_task,
.powergate_uvd = vega10_power_gate_uvd,
--
2.35.1
* Re: [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10
2022-03-13 5:28 [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10 Darren Powell
2022-03-13 5:28 ` [PATCH 1/2] amdgpu/pm: Add new hwmgr API function "emit_clock_levels" Darren Powell
2022-03-13 5:28 ` [PATCH 2/2] amdgpu/pm: Implement emit_clk_levels for vega10 Darren Powell
@ 2022-03-24 20:23 ` Powell, Darren
2022-03-25 1:38 ` Quan, Evan
2 siblings, 1 reply; 5+ messages in thread
From: Powell, Darren @ 2022-03-24 20:23 UTC (permalink / raw)
To: amd-gfx; +Cc: Quan, Evan
[AMD Official Use Only]
PING?
** https://lore.kernel.org/amd-gfx/20220313052839.5777-1-darren.powell@amd.com/T/#u
[PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10
2022-03-13 5:28 UTC (3+ messages)
` [PATCH 1/2] amdgpu/pm: Add new hwmgr API function "emit_clock_levels"
` [PATCH 2/2] amdgpu/pm: Implement emit_clk_levels for vega10
Thanks
Darren
* RE: [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution for vega10
2022-03-24 20:23 ` [PATCH v1 0/2] amdgpu/pm: Implement parallel sysfs_emit solution " Powell, Darren
@ 2022-03-25 1:38 ` Quan, Evan
0 siblings, 0 replies; 5+ messages in thread
From: Quan, Evan @ 2022-03-25 1:38 UTC (permalink / raw)
To: Powell, Darren, amd-gfx
[AMD Official Use Only]
Seems fine to me.
Series is reviewed-by: Evan Quan <evan.quan@amd.com>
Thanks,
Evan