* [PATCH] drm/amd/pp: use mclk_table.count for array loop index limit
@ 2018-03-21 18:26 Colin King
  2018-03-21 19:02 ` Joe Perches
  0 siblings, 1 reply; 3+ messages in thread
From: Colin King @ 2018-03-21 18:26 UTC (permalink / raw)
  To: Christian König, David Zhou, David Airlie, Rex Zhu, amd-gfx,
	dri-devel
  Cc: kernel-janitors, linux-kernel

From: Colin Ian King <colin.king@canonical.com>

The for-loops process data in the mclk_table but use sclk_table.count
as the loop index limit.  I believe these are cut-and-paste errors from
the preceding, almost identical sclk loops, as indicated by static
analysis.  Fix these.

Detected by CoverityScan, CID#1466001 ("Copy-paste error")

Fixes: 5d97cf39ff24 ("drm/amd/pp: Add and initialize OD_dpm_table for CI/VI.")
Fixes: 5e4d4fbea557 ("drm/amd/pp: Implement edit_dpm_table on smu7")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index df2a312ca6c9..d1983273ec7c 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -855,7 +855,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.mclk_table.count;
-	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
+	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
 		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
 					data->golden_dpm_table.mclk_table.dpm_levels[i].value;
 		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;
@@ -4735,7 +4735,7 @@ static void smu7_check_dpm_table_updated(struct pp_hwmgr *hwmgr)
 		}
 	}
 
-	for (i=0; i<data->dpm_table.sclk_table.count; i++) {
+	for (i=0; i<data->dpm_table.mclk_table.count; i++) {
 		if (odn_table->odn_memory_clock_dpm_levels.entries[i].clock !=
 					data->dpm_table.mclk_table.dpm_levels[i].value) {
 			data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_MCLK;
-- 
2.15.1
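
To illustrate the failure mode described in the commit message, here is a
minimal standalone sketch of the bug class; the structures and table sizes
below are hypothetical, not the real smu7 types.  With the wrong limit the
loop walks entries of the shorter table that were never populated (or,
depending on how the tables are allocated, past their end).

/*
 * Hypothetical sketch: the mclk-style table is iterated with the
 * sclk-style table's count, just like the loops fixed above.
 */
#include <stdio.h>

struct dpm_level {
	unsigned int value;
};

struct dpm_tbl {
	struct dpm_level dpm_levels[8];
	unsigned int count;
};

int main(void)
{
	struct dpm_tbl sclk = { .count = 8 };                 /* 8 sclk levels */
	struct dpm_tbl mclk = { { {300}, {600}, {900} }, 3 }; /* 3 mclk levels */
	unsigned int i;

	/* Buggy pattern: wrong limit, prints 5 mclk entries that were
	 * never populated. */
	for (i = 0; i < sclk.count; i++)
		printf("buggy: mclk[%u] = %u\n", i, mclk.dpm_levels[i].value);

	/* Fixed pattern: the limit matches the table being walked. */
	for (i = 0; i < mclk.count; i++)
		printf("fixed: mclk[%u] = %u\n", i, mclk.dpm_levels[i].value);

	return 0;
}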


* Re: [PATCH] drm/amd/pp: use mclk_table.count for array loop index limit
  2018-03-21 18:26 [PATCH] drm/amd/pp: use mclk_table.count for array loop index limit Colin King
@ 2018-03-21 19:02 ` Joe Perches
       [not found]   ` <1521658930.7999.25.camel-6d6DIl74uiNBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Joe Perches @ 2018-03-21 19:02 UTC (permalink / raw)
  To: Colin King, Christian König, David Zhou, David Airlie,
	Rex Zhu, amd-gfx, dri-devel
  Cc: kernel-janitors, linux-kernel

On Wed, 2018-03-21 at 18:26 +0000, Colin King wrote:
> From: Colin Ian King <colin.king@canonical.com>
> 
> The for-loops process data in the mclk_table but use sclk_table.count
> as the loop index limit.  I believe these are cut-and-paste errors from
> the preceding, almost identical sclk loops, as indicated by static
> analysis.  Fix these.

Nice tool.

> diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
[]
> @@ -855,7 +855,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
>  
>  	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
>  						data->golden_dpm_table.mclk_table.count;
> -	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
> +	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
>  					data->golden_dpm_table.mclk_table.dpm_levels[i].value;
>  		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;

Probably more sensible to use temporaries too.
Maybe something like the below (it also trivially reduces object size):
---
 drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
index df2a312ca6c9..339b897146af 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
@@ -834,6 +834,7 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table;
 	struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table;
+	struct phm_odn_performance_level *entries;
 
 	if (table_info == NULL)
 		return -EINVAL;
@@ -843,11 +844,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_core_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.sclk_table.count;
+	entries = odn_table->odn_core_clock_dpm_levels.entries;
 	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_core_clock_dpm_levels.entries[i].clock =
-					data->golden_dpm_table.sclk_table.dpm_levels[i].value;
-		odn_table->odn_core_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_core_clock_dpm_levels.entries[i].vddc = dep_sclk_table->entries[i].vddc;
+		entries[i].clock = data->golden_dpm_table.sclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_sclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_sclk_table,
@@ -855,11 +856,11 @@ static int smu7_odn_initial_default_setting(struct pp_hwmgr *hwmgr)
 
 	odn_table->odn_memory_clock_dpm_levels.num_of_pl =
 						data->golden_dpm_table.mclk_table.count;
-	for (i=0; i<data->golden_dpm_table.sclk_table.count; i++) {
-		odn_table->odn_memory_clock_dpm_levels.entries[i].clock =
-					data->golden_dpm_table.mclk_table.dpm_levels[i].value;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].enabled = true;
-		odn_table->odn_memory_clock_dpm_levels.entries[i].vddc = dep_mclk_table->entries[i].vddc;
+	entries = odn_table->odn_memory_clock_dpm_levels.entries;
+	for (i=0; i<data->golden_dpm_table.mclk_table.count; i++) {
+		entries[i].clock = data->golden_dpm_table.mclk_table.dpm_levels[i].value;
+		entries[i].enabled = true;
+		entries[i].vddc = dep_mclk_table->entries[i].vddc;
 	}
 
 	smu7_get_voltage_dependency_table(dep_mclk_table,


* Re: [PATCH] drm/amd/pp: use mclk_table.count for array loop index limit
       [not found]   ` <1521658930.7999.25.camel-6d6DIl74uiNBDgjK7y7TUQ@public.gmane.org>
@ 2018-03-22 15:03     ` Zhu, Rex
  0 siblings, 0 replies; 3+ messages in thread
From: Zhu, Rex @ 2018-03-22 15:03 UTC (permalink / raw)
  To: Joe Perches, Colin King, Koenig, Christian, Zhou, David(ChunMing),
	David Airlie, amd-gfx, dri-devel
  Cc: kernel-janitors, linux-kernel



Thanks.  Will apply the patch to drm-next.


Best Regards

Rex

