From: "Quan, Evan" <Evan.Quan@amd.com>
To: Alex Deucher <alexdeucher@gmail.com>
Cc: "Deucher, Alexander" <Alexander.Deucher@amd.com>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>
Subject: RE: [PATCH 2/2] drm/amd/pm: optimize the link width/speed retrieving
Date: Tue, 23 Feb 2021 02:03:36 +0000	[thread overview]
Message-ID: <DM6PR12MB2619B7625866332A0E44EB4AE4809@DM6PR12MB2619.namprd12.prod.outlook.com> (raw)
In-Reply-To: <CADnq5_N0=a_5wd1aLhvMPeX2_SnyTvA3+7tyt14Bx8mRo3-6PA@mail.gmail.com>

[AMD Official Use Only - Internal Distribution Only]

The PMFW on Arcturus does not expose that information to us.
So we have to stick with the current implementation (smu_v11_0_get_current_pcie_link_width/speed) for Arcturus.
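
For reference, a minimal sketch of the fallback Arcturus keeps; the wrapper name is illustrative and the gpu_metrics_v1_0 struct is assumed (neither appears in this patch):

/* Illustrative only: Arcturus keeps the generic SMU v11 helpers, which
 * derive link state from the PCIe registers instead of PMFW metrics. */
static void arcturus_fill_pcie_metrics(struct smu_context *smu,
				       struct gpu_metrics_v1_0 *gpu_metrics)
{
	gpu_metrics->pcie_link_width =
			smu_v11_0_get_current_pcie_link_width(smu);
	gpu_metrics->pcie_link_speed =
			smu_v11_0_get_current_pcie_link_speed(smu);
}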

Regards
Evan
-----Original Message-----
From: Alex Deucher <alexdeucher@gmail.com> 
Sent: Tuesday, February 23, 2021 5:48 AM
To: Quan, Evan <Evan.Quan@amd.com>
Cc: amd-gfx list <amd-gfx@lists.freedesktop.org>; Deucher, Alexander <Alexander.Deucher@amd.com>
Subject: Re: [PATCH 2/2] drm/amd/pm: optimize the link width/speed retrieving

On Sun, Feb 21, 2021 at 11:04 PM Evan Quan <evan.quan@amd.com> wrote:
>
> By using the information provided by PMFW when available.
>
> Change-Id: I1afec4cd07ac9608861ee07c449e320e3f36a932
> Signed-off-by: Evan Quan <evan.quan@amd.com>

What about arcturus?
Acked-by: Alex Deucher <alexander.deucher@amd.com>

> ---
>  .../gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c   | 17 ++++++++++----
>  .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 22 +++++++++++++++----
>  2 files changed, 31 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> index 29e04f33f276..7fe2876c99fe 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
> @@ -72,6 +72,8 @@
>
>  #define SMU_11_0_GFX_BUSY_THRESHOLD 15
>
> +static uint16_t link_speed[] = {25, 50, 80, 160};
> +
>  static struct cmn2asic_msg_mapping navi10_message_map[SMU_MSG_MAX_COUNT] = {
>         MSG_MAP(TestMessage,                    PPSMC_MSG_TestMessage,                  1),
>         MSG_MAP(GetSmuVersion,                  PPSMC_MSG_GetSmuVersion,                1),
> @@ -2391,10 +2393,17 @@ static ssize_t navi10_get_gpu_metrics(struct smu_context *smu,
>
>         gpu_metrics->current_fan_speed = metrics.CurrFanSpeed;
>
> -       gpu_metrics->pcie_link_width =
> -                       smu_v11_0_get_current_pcie_link_width(smu);
> -       gpu_metrics->pcie_link_speed =
> -                       smu_v11_0_get_current_pcie_link_speed(smu);
> +       if (((adev->asic_type == CHIP_NAVI14) && smu_version > 0x00351F00) ||
> +             ((adev->asic_type == CHIP_NAVI12) && smu_version > 0x00341C00) ||
> +             ((adev->asic_type == CHIP_NAVI10) && smu_version > 0x002A3B00)) {
> +               gpu_metrics->pcie_link_width = (uint16_t)metrics.PcieWidth;
> +               gpu_metrics->pcie_link_speed = link_speed[metrics.PcieRate];
> +       } else {
> +               gpu_metrics->pcie_link_width =
> +                               smu_v11_0_get_current_pcie_link_width(smu);
> +               gpu_metrics->pcie_link_speed =
> +                               smu_v11_0_get_current_pcie_link_speed(smu);
> +       }
>
>         gpu_metrics->system_clock_counter = ktime_get_boottime_ns();
>
> diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> index e74299da1739..6fd95764c952 100644
> --- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> +++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
> @@ -73,6 +73,8 @@
>
>  #define SMU_11_0_7_GFX_BUSY_THRESHOLD 15
>
> +static uint16_t link_speed[] = {25, 50, 80, 160};
> +
>  static struct cmn2asic_msg_mapping sienna_cichlid_message_map[SMU_MSG_MAX_COUNT] = {
>         MSG_MAP(TestMessage,                    PPSMC_MSG_TestMessage,                 1),
>         MSG_MAP(GetSmuVersion,                  PPSMC_MSG_GetSmuVersion,               1),
> @@ -3124,6 +3126,8 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
>         SmuMetricsExternal_t metrics_external;
>         SmuMetrics_t *metrics =
>                 &(metrics_external.SmuMetrics);
> +       struct amdgpu_device *adev = smu->adev;
> +       uint32_t smu_version;
>         int ret = 0;
>
>         ret = smu_cmn_get_metrics_table(smu,
> @@ -3170,10 +3174,20 @@ static ssize_t sienna_cichlid_get_gpu_metrics(struct smu_context *smu,
>
>         gpu_metrics->current_fan_speed = metrics->CurrFanSpeed;
>
> -       gpu_metrics->pcie_link_width =
> -                       smu_v11_0_get_current_pcie_link_width(smu);
> -       gpu_metrics->pcie_link_speed =
> -                       smu_v11_0_get_current_pcie_link_speed(smu);
> +       ret = smu_cmn_get_smc_version(smu, NULL, &smu_version);
> +       if (ret)
> +               return ret;
> +
> +       if (((adev->asic_type == CHIP_SIENNA_CICHLID) && smu_version > 0x003A1E00) ||
> +             ((adev->asic_type == CHIP_NAVY_FLOUNDER) && smu_version > 0x00410400)) {
> +               gpu_metrics->pcie_link_width = (uint16_t)metrics->PcieWidth;
> +               gpu_metrics->pcie_link_speed = link_speed[metrics->PcieRate];
> +       } else {
> +               gpu_metrics->pcie_link_width =
> +                               smu_v11_0_get_current_pcie_link_width(smu);
> +               gpu_metrics->pcie_link_speed =
> +                               smu_v11_0_get_current_pcie_link_speed(smu);
> +       }
>
>         gpu_metrics->system_clock_counter = ktime_get_boottime_ns();
>
> --
> 2.29.0
>
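
The link_speed table in both files holds values in units of 0.1 GT/s: {25, 50, 80, 160} maps PcieRate 0-3 to PCIe Gen1-4 (2.5/5.0/8.0/16.0 GT/s). A minimal bounds-checked lookup, with an illustrative helper name (the patch indexes the table directly):

static const uint16_t link_speed[] = {25, 50, 80, 160}; /* 0.1 GT/s */

/* Illustrative helper: clamp the PMFW-reported rate before indexing,
 * since a value outside Gen1-4 would read past the end of the table. */
static uint16_t pcie_rate_to_speed(uint32_t pcie_rate)
{
	if (pcie_rate >= ARRAY_SIZE(link_speed))
		pcie_rate = ARRAY_SIZE(link_speed) - 1;
	return link_speed[pcie_rate];
}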
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
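
The smu_version thresholds in the patch pack the firmware version as bytes, e.g. 0x003A1E00 is 58.30.0. A small decoding sketch, assuming the major/minor/patch packing amdgpu uses when printing SMU versions:

/* Decode a packed SMU firmware version word (assumed 0x00MMmmpp). */
static void decode_smu_version(uint32_t v)
{
	pr_info("SMU firmware %u.%u.%u\n",
		(v >> 16) & 0xff,	/* major: 0x3A = 58 */
		(v >> 8) & 0xff,	/* minor: 0x1E = 30 */
		v & 0xff);		/* patch: 0x00 = 0  */
}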
