From: "Sharma, Shashank" <shashank.sharma@amd.com>
To: Alex Deucher <alexdeucher@gmail.com>
Cc: alexander.deucher@amd.com,
	"Felix Kuehling" <felix.kuehling@amd.com>,
	amaranath.somalapuram@amd.com,
	"Christian König" <christian.koenig@amd.com>,
	amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH v3 5/5] drm/amdgpu: switch workload context to/from compute
Date: Wed, 28 Sep 2022 10:56:51 +0200	[thread overview]
Message-ID: <546cc67c-f17b-de76-afa1-2823f09120e8@amd.com> (raw)
In-Reply-To: <CADnq5_PavsH-=7hQjWn7aBSCVs30AYdNa_ixDUn7o-BRLh=78A@mail.gmail.com>



On 9/27/2022 10:40 PM, Alex Deucher wrote:
> On Tue, Sep 27, 2022 at 11:38 AM Sharma, Shashank
> <shashank.sharma@amd.com> wrote:
>>
>>
>>
>> On 9/27/2022 5:23 PM, Felix Kuehling wrote:
>>> Am 2022-09-27 um 10:58 schrieb Sharma, Shashank:
>>>> Hello Felix,
>>>>
>>>> Thanks for the review comments.
>>>>
>>>> On 9/27/2022 4:48 PM, Felix Kuehling wrote:
>>>>> Am 2022-09-27 um 02:12 schrieb Christian König:
>>>>>> Am 26.09.22 um 23:40 schrieb Shashank Sharma:
>>>>>>> This patch switches the GPU workload mode to/from
>>>>>>> compute mode, while submitting compute workload.
>>>>>>>
>>>>>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>>>>>> Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
>>>>>>
>>>>>> Feel free to add my acked-by, but Felix should probably take a look
>>>>>> as well.
>>>>>
>>>>> This looks OK purely from a compute perspective. But I'm concerned
>>>>> about the interaction of compute with graphics or multiple graphics
>>>>> contexts submitting work concurrently. They would constantly override
>>>>> or disable each other's workload hints.
>>>>>
>>>>> For example, you have an amdgpu_ctx with
>>>>> AMDGPU_CTX_WORKLOAD_HINT_COMPUTE (maybe Vulkan compute) and a KFD
>>>>> process that also wants the compute profile. Those could be different
>>>>> processes belonging to different users. Say, KFD enables the compute
>>>>> profile first. Then the graphics context submits a job. At the start
>>>>> of the job, the compute profile is enabled. That's a no-op because
>>>>> KFD already enabled the compute profile. When the job finishes, it
>>>>> disables the compute profile for everyone, including KFD. That's
>>>>> unexpected.
>>>>>
>>>>
>>>> In this case, it will not disable the compute profile, as the
>>>> reference counter will not be zero. The reset_profile() will only act
>>>> if the reference counter is 0.
>>>
>>> OK, I missed the reference counter.
>>>
>>>
>>>>
>>>> But I would be happy to get input on a policy that is more
>>>> sustainable and gives better results, for example:
>>>> - should we disallow a profile change if a PP mode is already
>>>> applied, i.e. keep it on a first-come, first-served basis?
>>>>
>>>> For example: Policy A
>>>> - Job A sets the profile to compute
>>>> - Job B tries to set the profile to 3D, but we do not allow it as
>>>> job A has not finished yet.
>>>>
>>>> Or Policy B: Current one
>>>> - Job A sets the profile to compute
>>>> - Job B tries to set profile to 3D, and we allow it. Job A also runs
>>>> in PP 3D
>>>> - Job B finishes, but does not reset PP as reference count is not zero
>>>> due to compute
>>>> - Job A finishes, profile reset to NONE
>>>
>>> I think this won't work. As I understand it, the
>>> amdgpu_dpm_switch_power_profile enables and disables individual
>>> profiles. Disabling the 3D profile doesn't disable the compute profile
>>> at the same time. I think you'll need one refcount per profile.
>>>
>>> Regards,
>>>     Felix
>>
>> Thanks, this is exactly what I was looking for. I think Alex's initial
>> idea was along these lines, but I was under the assumption that there
>> is only one HW profile in the SMU which keeps getting overwritten. This
>> can solve our problem, as I can create an array of reference counters
>> and disable only the profile whose reference counter drops to 0.
> 
> It's been a while since I paged any of this code into my head, but I
> believe the actual workload message in the SMU is a mask where you can
> specify multiple workload types at the same time and the SMU will
> arbitrate between them internally.  E.g., the most aggressive one will
> be selected out of the ones specified.  I think in the driver we just
> set one bit at a time using the current interface.  It might be better
> to change the interface and just ref count the hint types and then
> when we call the set function look at the ref counts for each hint
> type and set the mask as appropriate.
> 
> Alex
> 

Hey Alex,
Thanks for your comment. If that is the case, the current patch series
works as-is and no changes are required. Please let me know if my
understanding is correct:

Assumption: Order of aggression: 3D > Media > Compute

- Job 1: requests compute mode: PP changed to compute, ref count 1
- Job 2: requests media mode: PP changed to media, ref count 2
- Job 3: requests 3D mode: PP changed to 3D, ref count 3
- Job 1 finishes, drops the ref count to 2, doesn't reset the PP as
ref > 0, PP still 3D
- Job 3 finishes, drops the ref count to 1, doesn't reset the PP as
ref > 0, PP still 3D
- Job 2 finishes, drops the ref count to 0, PP changed to NONE.

In this way, every job operates in a power profile of the desired
aggression level or higher, and this API guarantees execution in at
least the desired power profile.
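
To make that concrete, below is a rough, untested sketch of what per-hint
ref counting with a combined SMU mask could look like. All names in it
(workload_profile_mgr, smu_send_workload_mask(), the WORKLOAD_HINT_* enum)
are made up for illustration and are not the existing amdgpu/SMU interfaces:

#include <linux/bits.h>
#include <linux/bug.h>
#include <linux/mutex.h>
#include <linux/types.h>

/* Placeholder: stands in for whatever message actually programs the SMU. */
void smu_send_workload_mask(u32 mask);

/*
 * Illustrative sketch only: one refcount per workload hint, combined into
 * a single mask handed to the SMU, which arbitrates internally (e.g. picks
 * the most aggressive active profile).
 */
enum workload_hint {
	WORKLOAD_HINT_NONE,
	WORKLOAD_HINT_COMPUTE,
	WORKLOAD_HINT_VIDEO,
	WORKLOAD_HINT_3D,
	WORKLOAD_HINT_MAX,
};

struct workload_profile_mgr {
	struct mutex lock;
	unsigned int refcount[WORKLOAD_HINT_MAX];
};

/* Build the mask from every hint that currently has a non-zero refcount. */
static u32 workload_build_mask(struct workload_profile_mgr *mgr)
{
	u32 mask = 0;
	int i;

	for (i = WORKLOAD_HINT_COMPUTE; i < WORKLOAD_HINT_MAX; i++)
		if (mgr->refcount[i])
			mask |= BIT(i);

	return mask;
}

/* Job start: take a reference on the hint and resend the combined mask. */
void workload_hint_get(struct workload_profile_mgr *mgr,
		       enum workload_hint hint)
{
	mutex_lock(&mgr->lock);
	mgr->refcount[hint]++;
	smu_send_workload_mask(workload_build_mask(mgr));
	mutex_unlock(&mgr->lock);
}

/* Job end: drop the reference; the hint's bit only clears at refcount 0. */
void workload_hint_put(struct workload_profile_mgr *mgr,
		       enum workload_hint hint)
{
	mutex_lock(&mgr->lock);
	if (!WARN_ON(mgr->refcount[hint] == 0))
		mgr->refcount[hint]--;
	smu_send_workload_mask(workload_build_mask(mgr));
	mutex_unlock(&mgr->lock);
}

With something along these lines, both of Felix's earlier examples (KFD plus
a graphics context wanting compute, and two VCN contexts sharing the VIDEO
profile) keep their bits set until the last user drops its reference.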

- Shashank

> 
>>
>> - Shashank
>>
>>>
>>>
>>>>
>>>>
>>>> Or anything else ?
>>>>
>>>> Regards
>>>> Shashank
>>>>
>>>>
>>>>> Or you have multiple VCN contexts. When context1 finishes a job, it
>>>>> disables the VIDEO profile. But context2 still has a job on the other
>>>>> VCN engine and wants the VIDEO profile to still be enabled.
>>>>>
>>>>> Regards,
>>>>>     Felix
>>>>>
>>>>>
>>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> ---
>>>>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 14 +++++++++++---
>>>>>>>    1 file changed, 11 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>>>>> index 5e53a5293935..1caed319a448 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>    #include "amdgpu_ras.h"
>>>>>>>    #include "amdgpu_umc.h"
>>>>>>>    #include "amdgpu_reset.h"
>>>>>>> +#include "amdgpu_ctx_workload.h"
>>>>>>>    /* Total memory size in system memory and all GPU VRAM. Used to
>>>>>>>     * estimate worst case amount of memory to reserve for page tables
>>>>>>> @@ -703,9 +704,16 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev,
>>>>>>>    void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
>>>>>>>    {
>>>>>>> -    amdgpu_dpm_switch_power_profile(adev,
>>>>>>> -                    PP_SMC_POWER_PROFILE_COMPUTE,
>>>>>>> -                    !idle);
>>>>>>> +    int ret;
>>>>>>> +
>>>>>>> +    if (idle)
>>>>>>> +        ret = amdgpu_clear_workload_profile(adev, AMDGPU_CTX_WORKLOAD_HINT_COMPUTE);
>>>>>>> +    else
>>>>>>> +        ret = amdgpu_set_workload_profile(adev, AMDGPU_CTX_WORKLOAD_HINT_COMPUTE);
>>>>>>> +
>>>>>>> +    if (ret)
>>>>>>> +        drm_warn(&adev->ddev, "Failed to %s power profile to compute mode\n",
>>>>>>> +             idle ? "reset" : "set");
>>>>>>>    }
>>>>>>>    bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid)
>>>>>>

