From: "Lazar, Lijo" <lijo.lazar@amd.com>
To: "Sharma, Shashank" <shashank.sharma@amd.com>,
Alex Deucher <alexdeucher@gmail.com>
Cc: alexander.deucher@amd.com,
"Felix Kuehling" <felix.kuehling@amd.com>,
amaranath.somalapuram@amd.com,
"Christian König" <christian.koenig@amd.com>,
amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH v3 5/5] drm/amdgpu: switch workload context to/from compute
Date: Thu, 29 Sep 2022 19:44:37 +0530 [thread overview]
Message-ID: <46b27eb5-1807-044f-111a-bb67622e7fd6@amd.com> (raw)
In-Reply-To: <fc87e88d-4de0-f307-1fb2-658df564a3a1@amd.com>
On 9/29/2022 7:30 PM, Sharma, Shashank wrote:
>
>
> On 9/29/2022 3:37 PM, Lazar, Lijo wrote:
>> To be clear on your understanding -
>>
>> Nothing is automatic in PMFW. PMFW picks a priority based on the
>> actual mask sent by driver.
>>
>> Assuming lower bits correspond to higher priority -
>>
>> If the driver sends a mask with Bit3 and Bit0 set, PMFW will choose
>> the profile that corresponds to Bit0. If the driver sends a mask with
>> Bit4 and Bit2 set and the rest unset, PMFW will choose the profile
>> that corresponds to Bit2. However, if the driver sends a mask with
>> only a single bit set, it chooses that profile regardless of whatever
>> the previous profile was. It doesn't check whether the existing
>> profile > the newly requested one. That is the behavior.
>>
>> So if a job chooses a profile that corresponds to Bit0, the driver
>> will send that. Next time, if another job chooses a profile that
>> corresponds to Bit1, PMFW will receive that as the new profile and
>> switch to it. It trusts the driver to send the proper workload mask.
>>
>> Hope that gives the picture.
>>
>
>
> Thanks, my understanding is also similar. Referring to the core power
> profile switch function here:
> amd_powerplay.c::pp_dpm_switch_power_profile()
> *snip code*
> hwmgr->workload_mask |= (1 << hwmgr->workload_prority[type]);
> index = fls(hwmgr->workload_mask);
> index = index <= Workload_Policy_Max ? index - 1 : 0;
> workload = hwmgr->workload_setting[index];
> *snip_code*
> hwmgr->hwmgr_func->set_power_profile_mode(hwmgr, &workload, 0);
>
> Here I can see that the new workload mask is ORed into the existing
> workload mask (not overwritten). So if we keep sending new workload
> modes, they will accumulate in the workload flags, and finally the PM
> will pick the most aggressive of all these flags, as per its policy.
>
Actually, it's misleading -
The path for Sienna Cichlid is -
set_power_profile_mode -> sienna_cichlid_set_power_profile_mode
This code picks one based on a lookup table:
workload_type = smu_cmn_to_asic_specific_index(smu,
                                               CMN2ASIC_MAPPING_WORKLOAD,
                                               smu->power_profile_mode);
This is that lookup table -
static struct cmn2asic_mapping
sienna_cichlid_workload_map[PP_SMC_POWER_PROFILE_COUNT] = {
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT, WORKLOAD_PPLIB_DEFAULT_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_FULLSCREEN3D,   WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_POWERSAVING,    WORKLOAD_PPLIB_POWER_SAVING_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VIDEO,          WORKLOAD_PPLIB_VIDEO_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VR,             WORKLOAD_PPLIB_VR_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE,        WORKLOAD_PPLIB_COMPUTE_BIT),
    WORKLOAD_MAP(PP_SMC_POWER_PROFILE_CUSTOM,         WORKLOAD_PPLIB_CUSTOM_BIT),
};
And this is the place of interaction with PMFW. (1 << workload_type) is
the mask being sent.
smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
                                1 << workload_type, NULL);
In the end, the driver implementation expects only one bit to be set.
Thanks,
Lijo
> Now, when we have a single workload:
> -> Job1: requests profile P1 via UAPI, ref_count = 1
> -> driver sends flags for P1
> -> PM FW applies profile P1
> -> Job1 executes in profile P1
> -> Job1 goes to reset function, ref_count = 0
> -> Power profile resets
>
> Now, we have conflicts only when we see multiple workloads (Job1 and Job2):
> -> Job1: requests profile P1 via UAPI, ref_count = 1
> -> driver sends flags for P1
> -> PM FW applies profile P1
> -> Job1 executes in profile P1
> -> Job2: requests profile P2 via UAPI, ref_count = 2
> -> driver sends flags for (P1|P2)
> -> PM FW picks the more aggressive of the two (say P1, stays in P1)
> -> Job1 goes to reset function, ref_count = 1, Job1 does not reset the
> power profile
> -> Job2 goes to reset function, ref_count = 0, Job2 resets the power
> profile
> -> Power profile resets to None
>
> So this state machine looks like: if there is only one job, it will be
> executed in the desired mode. But if there are multiple, the most
> aggressive profile will be picked, and every job will be executed in at
> least the requested power profile mode or higher.
>
> Do you find any problem so far?
>
> - Shashank
>
>
>> Thanks,
>> Lijo