* [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10
@ 2022-01-26  4:54 Darren Powell
  2022-01-26  4:54 ` [PATCH v3 1/3] amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset Darren Powell
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Darren Powell @ 2022-01-26  4:54 UTC (permalink / raw)
  To: amd-gfx; +Cc: Darren Powell

== Description ==
Use of scnprintf within the kernel is not recommended, but a simple sysfs_emit replacement
has not been successful because sysfs_emit requires the write pointer to be page aligned.
This patch set implements a new API, "emit_clock_levels", that passes both the buffer base
and the write offset down to the device rather than just the write pointer.
The API is implemented only for navi10, and only for some clocks, to solicit comments on
the approach before it is extended to other clock values and devices.
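
For context, a minimal sketch of the two callback shapes (the prototypes are taken
from the interface change in patch 1; the comments are illustrative):

	/* legacy: the callee sees only the current write pointer, which is no
	 * longer page aligned after the first write, so sysfs_emit() cannot
	 * be used for the second and later clock domains */
	int (*print_clock_levels)(void *handle, enum pp_clock_type type, char *buf);

	/* new: the callee receives the page-aligned base plus a running offset,
	 * so every write can go through sysfs_emit_at(buf, *offset, ...) */
	int (*emit_clock_levels)(void *handle, enum pp_clock_type type, char *buf, int *offset);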

Needs to be verified on a platform that supports overclocking.

   (v3)
     Rewrote patchset to order patches as (API, hw impl, usecase)
   (v2)
      Update to apply on commit 801771de03
      print the empty newline only if size == 0
      change return from amdgpu_dpm_emit_clock_levels if emit_clock_levels not found

== Patch Summary ==
   linux: (git@gitlab.freedesktop.org:agd5f) origin/amd-staging-drm-next @ 28907fd9e048
    + f3c1d971ea17 amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset
    + c67ac3384682 amdgpu/pm: Implementation of emit_clk_levels for navi10 that accepts buffer base and write offset
    + 734cca28fed3 amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock,od_clk_voltage}

== System Summary ==
 * DESKTOP(AMD FX-8350 + NAVI10(731F/ca), BIOS: F2)
  + ISO(Ubuntu 20.04.3 LTS)
  + Kernel(5.13.0-fdoagd5f-20220112-g28907fd9e0)

== Test ==
LOGFILE=pp_clk.test.log
AMDGPU_PCI_ADDR=`lspci -nn | grep "VGA\|Display" | cut -d " " -f 1`
AMDGPU_HWMON=`ls -la /sys/class/hwmon | grep $AMDGPU_PCI_ADDR | awk '{print $9}'`
HWMON_DIR=/sys/class/hwmon/${AMDGPU_HWMON}

lspci -nn | grep "VGA\|Display"  > $LOGFILE
FILES="pp_od_clk_voltage
pp_dpm_sclk
pp_dpm_mclk
pp_dpm_pcie
pp_dpm_socclk
pp_dpm_fclk
pp_dpm_dcefclk
pp_dpm_vclk
pp_dpm_dclk "

for f in $FILES
do
  echo === $f === >> $LOGFILE
  cat $HWMON_DIR/device/$f >> $LOGFILE
done
cat $LOGFILE
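
A sample of the per-file output this captures (values are illustrative only; the
"%d: %uMhz %s" format, with "*" marking the current level, comes from the navi10
code in patch 2):

 === pp_dpm_sclk ===
 0: 300Mhz
 1: 1150Mhz *
 2: 2100Mhz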

Darren Powell (3):
  amdgpu/pm: Implement new API function "emit" that accepts buffer base
    and write offset
  amdgpu/pm: Implementation of emit_clk_levels for navi10 that accepts
    buffer base and write offset
  amdgpu/pm: Linked emit_clock_levels to use cases
    amdgpu_get_pp_{dpm_clock,od_clk_voltage}

 .../gpu/drm/amd/include/kgd_pp_interface.h    |   1 +
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           |  21 ++
 drivers/gpu/drm/amd/pm/amdgpu_pm.c            |  49 +++--
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |   4 +
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     |  42 +++-
 drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h |  14 ++
 .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   | 188 ++++++++++++++++++
 7 files changed, 300 insertions(+), 19 deletions(-)


base-commit: 28907fd9e04859fab86a143271d29ff0ff043154
-- 
2.34.1



* [PATCH v3 1/3] amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset
  2022-01-26  4:54 [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Darren Powell
@ 2022-01-26  4:54 ` Darren Powell
  2022-01-26  4:54 ` [PATCH v3 2/3] amdgpu/pm: Implementation of emit_clk_levels for navi10 " Darren Powell
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Darren Powell @ 2022-01-26  4:54 UTC (permalink / raw)
  To: amd-gfx; +Cc: Darren Powell

   (v3)
     Rewrote patchset to order patches as (API, hw impl, usecase)

     - added an API for the new power management function emit_clk_levels.
       This function duplicates the functionality of print_clk_levels, but
       passes the buffer base and write offset down the stack.
     - new powerplay function emit_clock_levels, implemented by smu_emit_ppclk_levels().
       This function parallels smu_print_ppclk_levels(), calls emit_clk_levels,
       and allows errors to be returned to the caller (call chain sketched below).
     - new helper function smu_convert_to_smuclk, called by both smu_print_ppclk_levels()
       and smu_emit_ppclk_levels()
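
   A condensed view of the resulting call chain (the navi10 backend itself lands
   in patch 2; this sketch only names the hooks wired up here):

	amdgpu_dpm_emit_clock_levels(adev, type, buf, &offset)      /* dpm wrapper  */
	  -> pp_funcs->emit_clock_levels == smu_emit_ppclk_levels   /* swsmu layer  */
	    -> smu->ppt_funcs->emit_clk_levels(smu, clk, buf, offset) /* asic backend */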

Signed-off-by: Darren Powell <darren.powell@amd.com>
---
 .../gpu/drm/amd/include/kgd_pp_interface.h    |  1 +
 drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 21 ++++++++++
 drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h       |  4 ++
 drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c     | 42 ++++++++++++++++---
 drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h | 14 +++++++
 5 files changed, 77 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/kgd_pp_interface.h b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
index a8eec91c0995..8a8eb9411cdf 100644
--- a/drivers/gpu/drm/amd/include/kgd_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_pp_interface.h
@@ -321,6 +321,7 @@ struct amd_pm_funcs {
 	int (*get_fan_speed_pwm)(void *handle, u32 *speed);
 	int (*force_clock_level)(void *handle, enum pp_clock_type type, uint32_t mask);
 	int (*print_clock_levels)(void *handle, enum pp_clock_type type, char *buf);
+	int (*emit_clock_levels)(void *handle, enum pp_clock_type type, char *buf, int *offset);
 	int (*force_performance_level)(void *handle, enum amd_dpm_forced_level level);
 	int (*get_sclk_od)(void *handle);
 	int (*set_sclk_od)(void *handle, uint32_t value);
diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
index 728b6e10f302..03a690a3abb7 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
@@ -906,6 +906,27 @@ int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
 	return ret;
 }
 
+int amdgpu_dpm_emit_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf,
+				  int *offset)
+{
+	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
+	int ret = 0;
+
+	if (!pp_funcs->emit_clock_levels)
+		return -ENOENT;
+
+	mutex_lock(&adev->pm.mutex);
+	ret = pp_funcs->emit_clock_levels(adev->powerplay.pp_handle,
+					   type,
+					   buf,
+					   offset);
+	mutex_unlock(&adev->pm.mutex);
+
+	return ret;
+}
+
 int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
 				    uint64_t ppfeature_masks)
 {
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
index ba857ca75392..a33dd7069258 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm.h
@@ -428,6 +428,10 @@ int amdgpu_dpm_odn_edit_dpm_table(struct amdgpu_device *adev,
 int amdgpu_dpm_print_clock_levels(struct amdgpu_device *adev,
 				  enum pp_clock_type type,
 				  char *buf);
+int amdgpu_dpm_emit_clock_levels(struct amdgpu_device *adev,
+				  enum pp_clock_type type,
+				  char *buf,
+				  int *offset);
 int amdgpu_dpm_set_ppfeature_status(struct amdgpu_device *adev,
 				    uint64_t ppfeature_masks);
 int amdgpu_dpm_get_ppfeature_status(struct amdgpu_device *adev, char *buf);
diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
index c374c3067496..68430c824131 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
@@ -2387,11 +2387,8 @@ static int smu_print_smuclk_levels(struct smu_context *smu, enum smu_clk_type cl
 	return ret;
 }
 
-static int smu_print_ppclk_levels(void *handle,
-				  enum pp_clock_type type,
-				  char *buf)
+static enum smu_clk_type smu_convert_to_smuclk(enum pp_clock_type type)
 {
-	struct smu_context *smu = handle;
 	enum smu_clk_type clk_type;
 
 	switch (type) {
@@ -2424,12 +2421,46 @@ static int smu_print_ppclk_levels(void *handle,
 	case OD_CCLK:
 		clk_type = SMU_OD_CCLK; break;
 	default:
-		return -EINVAL;
+		clk_type = SMU_CLK_COUNT; break;
 	}
 
+	return clk_type;
+}
+
+static int smu_print_ppclk_levels(void *handle,
+				  enum pp_clock_type type,
+				  char *buf)
+{
+	struct smu_context *smu = handle;
+	enum smu_clk_type clk_type;
+
+	clk_type = smu_convert_to_smuclk(type);
+	if (clk_type == SMU_CLK_COUNT)
+		return -EINVAL;
+
 	return smu_print_smuclk_levels(smu, clk_type, buf);
 }
 
+static int smu_emit_ppclk_levels(void *handle, enum pp_clock_type type, char *buf, int *offset)
+{
+	struct smu_context *smu = handle;
+	enum smu_clk_type clk_type;
+
+	clk_type = smu_convert_to_smuclk(type);
+	if (clk_type == SMU_CLK_COUNT)
+		return -EINVAL;
+
+	if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
+		return -EOPNOTSUPP;
+
+	/* return early rather than dereferencing a missing callback */
+	if (!smu->ppt_funcs->emit_clk_levels)
+		return -ENOENT;
+
+	return smu->ppt_funcs->emit_clk_levels(smu, clk_type, buf, offset);
+}
+
 static int smu_od_edit_dpm_table(void *handle,
 				 enum PP_OD_DPM_TABLE_COMMAND type,
 				 long *input, uint32_t size)
@@ -3107,6 +3138,7 @@ static const struct amd_pm_funcs swsmu_pm_funcs = {
 	.get_fan_speed_pwm   = smu_get_fan_speed_pwm,
 	.force_clock_level       = smu_force_ppclk_levels,
 	.print_clock_levels      = smu_print_ppclk_levels,
+	.emit_clock_levels       = smu_emit_ppclk_levels,
 	.force_performance_level = smu_force_performance_level,
 	.read_sensor             = smu_read_sensor,
 	.get_performance_level   = smu_get_performance_level,
diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
index 3fdab6a44901..1429581dca9c 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
@@ -612,9 +612,23 @@ struct pptable_funcs {
 	 *                    to buffer. Star current level.
 	 *
 	 * Used for sysfs interfaces.
+	 * Return: Number of characters written to the buffer
 	 */
 	int (*print_clk_levels)(struct smu_context *smu, enum smu_clk_type clk_type, char *buf);
 
+	/**
+	 * @emit_clk_levels: Print DPM clock levels for a clock domain
+	 *                    to buffer using sysfs_emit_at. Star current level.
+	 *
+	 * Used for sysfs interfaces.
+	 * @buf: sysfs buffer
+	 * @offset: offset within the buffer to start printing, updated by the
+	 * function as it writes.
+	 *
+	 * Return: 0 on success or a negative error code on failure.
+	 */
+	int (*emit_clk_levels)(struct smu_context *smu, enum smu_clk_type clk_type, char *buf, int *offset);
+
 	/**
 	 * @force_clk_levels: Set a range of allowed DPM levels for a clock
 	 *                    domain.
-- 
2.34.1



* [PATCH v3 2/3] amdgpu/pm: Implementation of emit_clk_levels for navi10 that accepts buffer base and write offset
  2022-01-26  4:54 [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Darren Powell
  2022-01-26  4:54 ` [PATCH v3 1/3] amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset Darren Powell
@ 2022-01-26  4:54 ` Darren Powell
  2022-01-26  4:54 ` [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock, od_clk_voltage} Darren Powell
  2022-02-24  4:11 ` [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Alex Deucher
  3 siblings, 0 replies; 7+ messages in thread
From: Darren Powell @ 2022-01-26  4:54 UTC (permalink / raw)
  To: amd-gfx; +Cc: Darren Powell

   (v3)
     Rewrote patchset to order patches as (API, hw impl, usecase)

     - implement emit_clk_levels for navi10, based on print_clk_levels but using
       sysfs_emit_at directly, without the smu_cmn_get_sysfs_buf() workaround
       (write pattern sketched below)
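
     A minimal sketch of the write pattern used throughout the new function
     (illustrative values; "buf" is the page-aligned base handed in by the
     sysfs core, and example_emit is a hypothetical helper, not part of the patch):

	static int example_emit(char *buf, int *offset)
	{
		/* sysfs_emit_at() checks that buf is page aligned, so the base
		 * pointer and the running offset must travel separately */
		*offset += sysfs_emit_at(buf, *offset, "0: %uMhz %s\n", 300, "*");
		*offset += sysfs_emit_at(buf, *offset, "1: %uMhz %s\n", 1200, "");
		return 0;
	}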

Signed-off-by: Darren Powell <darren.powell@amd.com>
---
 .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   | 188 ++++++++++++++++++
 1 file changed, 188 insertions(+)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
index 37e11716e919..4bcef7d1a0d6 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
@@ -1261,6 +1261,193 @@ static void navi10_od_setting_get_range(struct smu_11_0_overdrive_table *od_tabl
 		*max = od_table->max[setting];
 }
 
+static int navi10_emit_clk_levels(struct smu_context *smu,
+			enum smu_clk_type clk_type, char *buf, int *offset)
+{
+	uint16_t *curve_settings;
+	int ret = 0;
+	uint32_t cur_value = 0, value = 0;
+	uint32_t freq_values[3] = {0};
+	uint32_t i, levels, mark_index = 0, count = 0;
+	struct smu_table_context *table_context = &smu->smu_table;
+	uint32_t gen_speed, lane_width;
+	struct smu_dpm_context *smu_dpm = &smu->smu_dpm;
+	struct smu_11_0_dpm_context *dpm_context = smu_dpm->dpm_context;
+	PPTable_t *pptable = (PPTable_t *)table_context->driver_pptable;
+	OverDriveTable_t *od_table =
+		(OverDriveTable_t *)table_context->overdrive_table;
+	struct smu_11_0_overdrive_table *od_settings = smu->od_settings;
+	uint32_t min_value, max_value;
+
+	switch (clk_type) {
+	case SMU_GFXCLK:
+	case SMU_SCLK:
+	case SMU_SOCCLK:
+	case SMU_MCLK:
+	case SMU_UCLK:
+	case SMU_FCLK:
+	case SMU_VCLK:
+	case SMU_DCLK:
+	case SMU_DCEFCLK:
+		ret = navi10_get_current_clk_freq_by_table(smu, clk_type, &cur_value);
+		if (ret)
+			return ret;
+
+		ret = smu_v11_0_get_dpm_level_count(smu, clk_type, &count);
+		if (ret)
+			return ret;
+
+		if (!navi10_is_support_fine_grained_dpm(smu, clk_type)) {
+			for (i = 0; i < count; i++) {
+				ret = smu_v11_0_get_dpm_freq_by_index(smu, clk_type, i, &value);
+				if (ret)
+					return ret;
+
+				*offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n", i, value,
+						cur_value == value ? "*" : "");
+			}
+		} else {
+			ret = smu_v11_0_get_dpm_freq_by_index(smu, clk_type, 0, &freq_values[0]);
+			if (ret)
+				return ret;
+			ret = smu_v11_0_get_dpm_freq_by_index(smu, clk_type, count - 1, &freq_values[2]);
+			if (ret)
+				return ret;
+
+			freq_values[1] = cur_value;
+			mark_index = cur_value == freq_values[0] ? 0 :
+				     cur_value == freq_values[2] ? 2 : 1;
+
+			levels = 3;
+			if (mark_index != 1) {
+				levels = 2;
+				freq_values[1] = freq_values[2];
+			}
+
+			for (i = 0; i < levels; i++) {
+				*offset += sysfs_emit_at(buf, *offset, "%d: %uMhz %s\n", i, freq_values[i],
+						i == mark_index ? "*" : "");
+			}
+		}
+		break;
+	case SMU_PCIE:
+		gen_speed = smu_v11_0_get_current_pcie_link_speed_level(smu);
+		lane_width = smu_v11_0_get_current_pcie_link_width_level(smu);
+		for (i = 0; i < NUM_LINK_LEVELS; i++) {
+			*offset += sysfs_emit_at(buf, *offset, "%d: %s %s %dMhz %s\n", i,
+					(dpm_context->dpm_tables.pcie_table.pcie_gen[i] == 0) ? "2.5GT/s," :
+					(dpm_context->dpm_tables.pcie_table.pcie_gen[i] == 1) ? "5.0GT/s," :
+					(dpm_context->dpm_tables.pcie_table.pcie_gen[i] == 2) ? "8.0GT/s," :
+					(dpm_context->dpm_tables.pcie_table.pcie_gen[i] == 3) ? "16.0GT/s," : "",
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 1) ? "x1" :
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 2) ? "x2" :
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 3) ? "x4" :
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 4) ? "x8" :
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 5) ? "x12" :
+					(dpm_context->dpm_tables.pcie_table.pcie_lane[i] == 6) ? "x16" : "",
+					pptable->LclkFreq[i],
+					(gen_speed == dpm_context->dpm_tables.pcie_table.pcie_gen[i]) &&
+					(lane_width == dpm_context->dpm_tables.pcie_table.pcie_lane[i]) ?
+					"*" : "");
+		}
+		break;
+	case SMU_OD_SCLK:
+		if (!smu->od_enabled || !od_table || !od_settings)
+			return -EOPNOTSUPP;
+		if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS))
+			break;
+		*offset += sysfs_emit_at(buf, *offset, "OD_SCLK:\n0: %uMhz\n1: %uMhz\n",
+					  od_table->GfxclkFmin, od_table->GfxclkFmax);
+		break;
+	case SMU_OD_MCLK:
+		if (!smu->od_enabled || !od_table || !od_settings)
+			return -EOPNOTSUPP;
+		if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_UCLK_MAX))
+			break;
+		*offset += sysfs_emit_at(buf, *offset, "OD_MCLK:\n1: %uMHz\n", od_table->UclkFmax);
+		break;
+	case SMU_OD_VDDC_CURVE:
+		if (!smu->od_enabled || !od_table || !od_settings)
+			return -EOPNOTSUPP;
+		if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_CURVE))
+			break;
+		*offset += sysfs_emit_at(buf, *offset, "OD_VDDC_CURVE:\n");
+		for (i = 0; i < 3; i++) {
+			switch (i) {
+			case 0:
+				curve_settings = &od_table->GfxclkFreq1;
+				break;
+			case 1:
+				curve_settings = &od_table->GfxclkFreq2;
+				break;
+			case 2:
+				curve_settings = &od_table->GfxclkFreq3;
+				break;
+			default:
+				break;
+			}
+			*offset += sysfs_emit_at(buf, *offset, "%d: %uMHz %umV\n",
+						 i, curve_settings[0],
+						 curve_settings[1] / NAVI10_VOLTAGE_SCALE);
+		}
+		break;
+	case SMU_OD_RANGE:
+		if (!smu->od_enabled || !od_table || !od_settings)
+			return -EOPNOTSUPP;
+		*offset += sysfs_emit_at(buf, *offset, "%s:\n", "OD_RANGE");
+
+		if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS)) {
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_GFXCLKFMIN,
+						    &min_value, NULL);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_GFXCLKFMAX,
+						    NULL, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "SCLK: %7uMhz %10uMhz\n",
+					min_value, max_value);
+		}
+
+		if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_UCLK_MAX)) {
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_UCLKFMAX,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "MCLK: %7uMhz %10uMhz\n",
+					min_value, max_value);
+		}
+
+		if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_CURVE)) {
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEFREQ_P1,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_SCLK[0]: %7uMhz %10uMhz\n",
+					      min_value, max_value);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEVOLTAGE_P1,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_VOLT[0]: %7dmV %11dmV\n",
+					      min_value, max_value);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEFREQ_P2,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_SCLK[1]: %7uMhz %10uMhz\n",
+					      min_value, max_value);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEVOLTAGE_P2,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_VOLT[1]: %7dmV %11dmV\n",
+					      min_value, max_value);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEFREQ_P3,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_SCLK[2]: %7uMhz %10uMhz\n",
+					      min_value, max_value);
+			navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEVOLTAGE_P3,
+						    &min_value, &max_value);
+			*offset += sysfs_emit_at(buf, *offset, "VDDC_CURVE_VOLT[2]: %7dmV %11dmV\n",
+					      min_value, max_value);
+		}
+
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
 static int navi10_print_clk_levels(struct smu_context *smu,
 			enum smu_clk_type clk_type, char *buf)
 {
@@ -3238,6 +3425,7 @@ static const struct pptable_funcs navi10_ppt_funcs = {
 	.i2c_init = navi10_i2c_control_init,
 	.i2c_fini = navi10_i2c_control_fini,
 	.print_clk_levels = navi10_print_clk_levels,
+	.emit_clk_levels = navi10_emit_clk_levels,
 	.force_clk_levels = navi10_force_clk_levels,
 	.populate_umd_state_clk = navi10_populate_umd_state_clk,
 	.get_clock_by_type_with_latency = navi10_get_clock_by_type_with_latency,
-- 
2.34.1



* [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock, od_clk_voltage}
  2022-01-26  4:54 [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Darren Powell
  2022-01-26  4:54 ` [PATCH v3 1/3] amdgpu/pm: Implement new API function "emit" that accepts buffer base and write offset Darren Powell
  2022-01-26  4:54 ` [PATCH v3 2/3] amdgpu/pm: Implementation of emit_clk_levels for navi10 " Darren Powell
@ 2022-01-26  4:54 ` Darren Powell
  2022-01-27  2:22   ` [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock,od_clk_voltage} Quan, Evan
  2022-02-24  4:11 ` [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Alex Deucher
  3 siblings, 1 reply; 7+ messages in thread
From: Darren Powell @ 2022-01-26  4:54 UTC (permalink / raw)
  To: amd-gfx; +Cc: Darren Powell

   (v3)
     Rewrote patchset to order patches as (API, hw impl, usecase)

     - modified amdgpu_get_pp_od_clk_voltage to try amdgpu_dpm_emit_clock_levels and
       fall back to amdgpu_dpm_print_clock_levels if emit is not implemented.
     - modified amdgpu_get_pp_dpm_clock to try amdgpu_dpm_emit_clock_levels and
       fall back to amdgpu_dpm_print_clock_levels if emit is not implemented.
     - a newline is printed to buf if no output is produced (contract condensed below)
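
   The fallback contract, condensed from the hunks below (error handling only;
   the variable names match the patch):

	ret = amdgpu_dpm_emit_clock_levels(adev, type, buf, &size);
	if (ret == -ENOENT)	/* no emit callback wired up for this asic */
		size = amdgpu_dpm_print_clock_levels(adev, type, buf);
	if (size == 0)		/* neither path produced output */
		size = sysfs_emit(buf, "\n");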

 == Test ==
 LOGFILE=pp_clk.test.log
 AMDGPU_PCI_ADDR=`lspci -nn | grep "VGA\|Display" | cut -d " " -f 1`
 AMDGPU_HWMON=`ls -la /sys/class/hwmon | grep $AMDGPU_PCI_ADDR | awk '{print $9}'`
 HWMON_DIR=/sys/class/hwmon/${AMDGPU_HWMON}

 lspci -nn | grep "VGA\|Display"  > $LOGFILE
 FILES="pp_od_clk_voltage
 pp_dpm_sclk
 pp_dpm_mclk
 pp_dpm_pcie
 pp_dpm_socclk
 pp_dpm_fclk
 pp_dpm_dcefclk
 pp_dpm_vclk
 pp_dpm_dclk "

 for f in $FILES
 do
   echo === $f === >> $LOGFILE
   cat $HWMON_DIR/device/$f >> $LOGFILE
 done
 cat $LOGFILE

Signed-off-by: Darren Powell <darren.powell@amd.com>
---
 drivers/gpu/drm/amd/pm/amdgpu_pm.c | 49 +++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
index d2823aaeca09..a11def0ee761 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
@@ -832,8 +832,17 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	ssize_t size;
+	int size = 0;
 	int ret;
+	enum pp_clock_type od_clocks[6] = {
+		OD_SCLK,
+		OD_MCLK,
+		OD_VDDC_CURVE,
+		OD_RANGE,
+		OD_VDDGFX_OFFSET,
+		OD_CCLK,
+	};
+	uint clk_index;
 
 	if (amdgpu_in_reset(adev))
 		return -EPERM;
@@ -846,16 +855,25 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
 		return ret;
 	}
 
-	size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
-	if (size > 0) {
-		size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
-		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf+size);
-		size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf+size);
-		size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf+size);
-		size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf+size);
-	} else {
-		size = sysfs_emit(buf, "\n");
+	for (clk_index = 0; clk_index < 6; clk_index++) {
+		ret = amdgpu_dpm_emit_clock_levels(adev, od_clocks[clk_index], buf, &size);
+		if (ret)
+			break;
+	}
+	if (ret == -ENOENT) {
+		size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
+		if (size > 0) {
+			size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf+size);
+			size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf+size);
+			size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf+size);
+			size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf+size);
+			size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf+size);
+		}
 	}
+
+	if (size == 0)
+		size = sysfs_emit(buf, "\n");
+
 	pm_runtime_mark_last_busy(ddev->dev);
 	pm_runtime_put_autosuspend(ddev->dev);
 
@@ -980,8 +998,8 @@ static ssize_t amdgpu_get_pp_dpm_clock(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = drm_to_adev(ddev);
-	ssize_t size;
-	int ret;
+	int size = 0;
+	int ret = 0;
 
 	if (amdgpu_in_reset(adev))
 		return -EPERM;
@@ -994,8 +1012,11 @@ static ssize_t amdgpu_get_pp_dpm_clock(struct device *dev,
 		return ret;
 	}
 
-	size = amdgpu_dpm_print_clock_levels(adev, type, buf);
-	if (size <= 0)
+	ret = amdgpu_dpm_emit_clock_levels(adev, type, buf, &size);
+	if (ret == -ENOENT)
+		size = amdgpu_dpm_print_clock_levels(adev, type, buf);
+
+	if (size == 0)
 		size = sysfs_emit(buf, "\n");
 
 	pm_runtime_mark_last_busy(ddev->dev);
-- 
2.34.1



* RE: [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock,od_clk_voltage}
  2022-01-26  4:54 ` [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock, od_clk_voltage} Darren Powell
@ 2022-01-27  2:22   ` Quan, Evan
  0 siblings, 0 replies; 7+ messages in thread
From: Quan, Evan @ 2022-01-27  2:22 UTC (permalink / raw)
  To: Powell, Darren, amd-gfx

[AMD Official Use Only]

Series is reviewed-by: Evan Quan <evan.quan@amd.com>

> -----Original Message-----
> From: Powell, Darren <Darren.Powell@amd.com>
> Sent: Wednesday, January 26, 2022 12:55 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Powell, Darren <Darren.Powell@amd.com>
> Subject: [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases
> amdgpu_get_pp_{dpm_clock,od_clk_voltage}
> 
> [snip]


* Re: [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10
  2022-01-26  4:54 [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Darren Powell
                   ` (2 preceding siblings ...)
  2022-01-26  4:54 ` [PATCH v3 3/3] amdgpu/pm: Linked emit_clock_levels to use cases amdgpu_get_pp_{dpm_clock, od_clk_voltage} Darren Powell
@ 2022-02-24  4:11 ` Alex Deucher
  2022-02-26 21:32   ` Powell, Darren
  3 siblings, 1 reply; 7+ messages in thread
From: Alex Deucher @ 2022-02-24  4:11 UTC (permalink / raw)
  To: Darren Powell; +Cc: amd-gfx list

On Tue, Jan 25, 2022 at 11:55 PM Darren Powell <darren.powell@amd.com> wrote:
>
> == Description ==
> Use of scnprintf within the kernel is not recommended, but a simple sysfs_emit
> replacement has not been successful because sysfs_emit requires the write pointer
> to be page aligned. This patch set implements a new API, "emit_clock_levels", that
> passes both the buffer base and the write offset down to the device rather than
> just the write pointer.
> The API is implemented only for navi10, and only for some clocks, to solicit
> comments on the approach before it is extended to other clock values and devices.

Were you planning to extend this to other asics?

Alex

> [snip]


* Re: [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10
  2022-02-24  4:11 ` [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10 Alex Deucher
@ 2022-02-26 21:32   ` Powell, Darren
  0 siblings, 0 replies; 7+ messages in thread
From: Powell, Darren @ 2022-02-26 21:32 UTC (permalink / raw)
  To: Alex Deucher; +Cc: amd-gfx list

[AMD Official Use Only]

The plan is to proceed with the VEGA20 implementation next, and then port the change to the other platforms after that patch is RB'd.
Thanks
Darren
________________________________
From: Alex Deucher <alexdeucher@gmail.com>
Sent: Wednesday, February 23, 2022 11:11 PM
To: Powell, Darren <Darren.Powell@amd.com>
Cc: amd-gfx list <amd-gfx@lists.freedesktop.org>
Subject: Re: [PATCH v3 0/3] amdgpu/pm: Implement parallel sysfs_emit solution for navi10

On Tue, Jan 25, 2022 at 11:55 PM Darren Powell <darren.powell@amd.com> wrote:
>
> [snip]

Were you planning to extend this to other asics?

Alex
