From: "Quan, Evan" <Evan.Quan@amd.com>
To: "Lazar, Lijo" <Lijo.Lazar@amd.com>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>
Cc: "Deucher, Alexander" <Alexander.Deucher@amd.com>,
	"pmenzel@molgen.mpg.de" <pmenzel@molgen.mpg.de>,
	"arthur.marsh@internode.on.net" <arthur.marsh@internode.on.net>
Subject: RE: [PATCH V2] drm/amd/pm: fix the deadlock issue observed on SI
Date: Fri, 22 Apr 2022 08:53:14 +0000	[thread overview]
Message-ID: <DM6PR12MB2619F866BE71603B28F339C3E4F79@DM6PR12MB2619.namprd12.prod.outlook.com> (raw)
In-Reply-To: <e3adf823-670c-0067-d74b-0105f42fa85e@amd.com>

[AMD Official Use Only]



> -----Original Message-----
> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> Sent: Friday, April 22, 2022 4:36 PM
> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>;
> pmenzel@molgen.mpg.de; arthur.marsh@internode.on.net
> Subject: Re: [PATCH V2] drm/amd/pm: fix the deadlock issue observed on SI
> 
> 
> 
> On 4/22/2022 1:57 PM, Quan, Evan wrote:
> > [AMD Official Use Only]
> >
> >
> >
> >> -----Original Message-----
> >> From: Lazar, Lijo <Lijo.Lazar@amd.com>
> >> Sent: Monday, April 11, 2022 6:06 PM
> >> To: Quan, Evan <Evan.Quan@amd.com>; amd-gfx@lists.freedesktop.org
> >> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>;
> >> pmenzel@molgen.mpg.de; arthur.marsh@internode.on.net
> >> Subject: Re: [PATCH V2] drm/amd/pm: fix the deadlock issue observed
> >> on SI
> >>
> >>
> >>
> >> On 4/11/2022 2:24 PM, Evan Quan wrote:
> >>> The adev->pm.mutex is already held at the beginning of
> >>> amdgpu_dpm_compute_clocks/amdgpu_dpm_enable_uvd/amdgpu_dpm_enable_vce.
> >>> But on their calling path, amdgpu_display_bandwidth_update will be
> >>> called, and thus its subroutines amdgpu_dpm_get_sclk/mclk. Those will
> >>> then try to acquire the same adev->pm.mutex, and a deadlock will occur.
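[Quan, Evan] For anyone who wants to see the failure mode outside the
driver: this is the classic self-deadlock on a non-recursive mutex. A
minimal userspace sketch (illustrative only -- the names just mirror the
driver call chain, this is not the actual kernel code):

    #include <pthread.h>
    #include <stdio.h>

    /* stands in for adev->pm.mutex; kernel mutexes are likewise non-recursive */
    static pthread_mutex_t pm_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int get_sclk(void)              /* mirrors amdgpu_dpm_get_sclk() */
    {
        pthread_mutex_lock(&pm_mutex);     /* second acquisition, same thread: blocks forever */
        pthread_mutex_unlock(&pm_mutex);
        return 0;
    }

    static void bandwidth_update(void)     /* mirrors amdgpu_display_bandwidth_update() */
    {
        get_sclk();
    }

    int main(void)
    {
        pthread_mutex_lock(&pm_mutex);     /* amdgpu_dpm_compute_clocks() takes pm.mutex... */
        bandwidth_update();                /* ...and the nested path tries to take it again */
        pthread_mutex_unlock(&pm_mutex);
        printf("never reached\n");
        return 0;
    }

In the kernel, a CONFIG_PROVE_LOCKING build should flag the same pattern
as a recursive locking attempt via lockdep.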
> >>>
> >>
> >> What about using locked versions of get_sclk/mclk and leaving the
> >> rest as they are?
> > [Quan, Evan] Yeah, that (adding two new APIs for locked versions of
> > get_sclk/mclk) should also work. But considering no other ASICs will
> > ever need them, they would be kind of redundant.
> > Meanwhile, my version just reverts everything back to its original
> > order, so it should be quite safe. What do you think?
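[Quan, Evan] For reference, a rough sketch of what such a locked variant
could look like (hypothetical -- amdgpu_dpm_get_sclk_locked is a made-up
name, this was never implemented):

    /* caller must already hold adev->pm.mutex */
    static uint32_t amdgpu_dpm_get_sclk_locked(struct amdgpu_device *adev, bool low)
    {
        const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;

        lockdep_assert_held(&adev->pm.mutex);

        return pp_funcs->get_sclk(adev->powerplay.pp_handle, low);
    }

The amdgpu_display_bandwidth_update() path would then use the _locked
variants where pm.mutex is already held. That works too; it just adds
API surface for a single-ASIC corner case.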
> >
> 
> As far as I see -
> 
> 1) It moves display watermark updates outside of locking
> 2) It adds bandwidth update under common compute_clocks paths.
[Quan, Evan] I suppose what you mentioned is about "amdgpu_display_bandwidth_update".
No, I just double-checked (against the old code in our amd-staging-dkms-5.9/5.10 branches), and it did come with no lock protection.
For those legacy ASICs, the watermark setting was performed by programming registers directly (handled on the display side).
That might explain why it did not have the lock protection that recent ASICs do.
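To illustrate, the watermark programming on those parts is plain MMIO
from the display code, conceptually like the sketch below (the register
and field names here are made up for illustration; the real code lives
in the dce_v*_0.c files):

    /* direct register write -- adev->pm.mutex is never taken on this path */
    static void dce_program_watermarks(struct amdgpu_device *adev,
                                       u32 pipe_offset, u32 latency_wm)
    {
        u32 tmp;

        tmp = RREG32(mmPIPE_ARBITRATION_CONTROL + pipe_offset);
        tmp &= ~LATENCY_WATERMARK_MASK;
        tmp |= latency_wm << LATENCY_WATERMARK_SHIFT;
        WREG32(mmPIPE_ARBITRATION_CONTROL + pipe_offset, tmp);
    }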

BR
Evan
> 
> I guess it is not the same as the pre-mutex change.
> 
> Thanks,
> Lijo
> 
> > BR
> > Evan
> >>
> >> Thanks,
> >> Lijo
> >>
> >>> By placing amdgpu_display_bandwidth_update outside of adev->pm.mutex
> >>> protection (considering logically it does not need such protection)
> >>> and restructuring the call flow accordingly, we can eliminate the
> >>> deadlock issue. This comes with no real logic change.
> >>>
> >>> Fixes: 3712e7a49459 ("drm/amd/pm: unified lock protections in amdgpu_dpm.c")
> >>> Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
> >>> Reported-by: Arthur Marsh <arthur.marsh@internode.on.net>
> >>> Link: https://lore.kernel.org/all/9e689fea-6c69-f4b0-8dee-32c4cf7d8f9c@molgen.mpg.de/
> >>> BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1957
> >>> Signed-off-by: Evan Quan <evan.quan@amd.com>
> >>> --
> >>> v1->v2:
> >>>     - enrich the commit messages (Paul)
> >>> ---
> >>>    drivers/gpu/drm/amd/pm/amdgpu_dpm.c           | 39 +++++++++++++++++++
> >>>    .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    | 10 -----
> >>>    drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c    | 35 -----------------
> >>>    .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  | 10 -----
> >>>    4 files changed, 39 insertions(+), 55 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> index 5504d81c77b7..72e7b5d40af6 100644
> >>> --- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
> >>> @@ -427,6 +427,7 @@ int amdgpu_dpm_read_sensor(struct amdgpu_device *adev, enum amd_pp_sensors senso
> >>>    void amdgpu_dpm_compute_clocks(struct amdgpu_device *adev)
> >>>    {
> >>>    	const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;
> >>> +	int i;
> >>>
> >>>    	if (!adev->pm.dpm_enabled)
> >>>    		return;
> >>> @@ -434,6 +435,15 @@ void amdgpu_dpm_compute_clocks(struct amdgpu_device *adev)
> >>>    	if (!pp_funcs->pm_compute_clocks)
> >>>    		return;
> >>>
> >>> +	if (adev->mode_info.num_crtc)
> >>> +		amdgpu_display_bandwidth_update(adev);
> >>> +
> >>> +	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
> >>> +		struct amdgpu_ring *ring = adev->rings[i];
> >>> +		if (ring && ring->sched.ready)
> >>> +			amdgpu_fence_wait_empty(ring);
> >>> +	}
> >>> +
> >>>    	mutex_lock(&adev->pm.mutex);
> >>>    	pp_funcs->pm_compute_clocks(adev->powerplay.pp_handle);
> >>>    	mutex_unlock(&adev->pm.mutex);
> >>> @@ -443,6 +453,20 @@ void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
> >>>    {
> >>>    	int ret = 0;
> >>>
> >>> +	if (adev->family == AMDGPU_FAMILY_SI) {
> >>> +		mutex_lock(&adev->pm.mutex);
> >>> +		if (enable) {
> >>> +			adev->pm.dpm.uvd_active = true;
> >>> +			adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
> >>> +		} else {
> >>> +			adev->pm.dpm.uvd_active = false;
> >>> +		}
> >>> +		mutex_unlock(&adev->pm.mutex);
> >>> +
> >>> +		amdgpu_dpm_compute_clocks(adev);
> >>> +		return;
> >>> +	}
> >>> +
> >>>    	ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_UVD, !enable);
> >>>    	if (ret)
> >>>    		DRM_ERROR("Dpm %s uvd failed, ret = %d. \n",
> >>> @@ -453,6 +477,21 @@ void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
> >>>    {
> >>>    	int ret = 0;
> >>>
> >>> +	if (adev->family == AMDGPU_FAMILY_SI) {
> >>> +		mutex_lock(&adev->pm.mutex);
> >>> +		if (enable) {
> >>> +			adev->pm.dpm.vce_active = true;
> >>> +			/* XXX select vce level based on ring/task */
> >>> +			adev->pm.dpm.vce_level = AMD_VCE_LEVEL_AC_ALL;
> >>> +		} else {
> >>> +			adev->pm.dpm.vce_active = false;
> >>> +		}
> >>> +		mutex_unlock(&adev->pm.mutex);
> >>> +
> >>> +		amdgpu_dpm_compute_clocks(adev);
> >>> +		return;
> >>> +	}
> >>> +
> >>>    	ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VCE, !enable);
> >>>    	if (ret)
> >>>    		DRM_ERROR("Dpm %s vce failed, ret = %d. \n",
> >>> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> >>> index 9613c6181c17..d3fe149d8476 100644
> >>> --- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
> >>> @@ -1028,16 +1028,6 @@ static int amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
> >>>    void amdgpu_legacy_dpm_compute_clocks(void *handle)
> >>>    {
> >>>    	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -	int i = 0;
> >>> -
> >>> -	if (adev->mode_info.num_crtc)
> >>> -		amdgpu_display_bandwidth_update(adev);
> >>> -
> >>> -	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
> >>> -		struct amdgpu_ring *ring = adev->rings[i];
> >>> -		if (ring && ring->sched.ready)
> >>> -			amdgpu_fence_wait_empty(ring);
> >>> -	}
> >>>
> >>>    	amdgpu_dpm_get_active_displays(adev);
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> >>> index caae54487f9c..633dab14f51c 100644
> >>> --- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> >>> +++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
> >>> @@ -3892,40 +3892,6 @@ static int si_set_boot_state(struct amdgpu_device *adev)
> >>>    }
> >>>    #endif
> >>>
> >>> -static int si_set_powergating_by_smu(void *handle,
> >>> -				     uint32_t block_type,
> >>> -				     bool gate)
> >>> -{
> >>> -	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> >>> -
> >>> -	switch (block_type) {
> >>> -	case AMD_IP_BLOCK_TYPE_UVD:
> >>> -		if (!gate) {
> >>> -			adev->pm.dpm.uvd_active = true;
> >>> -			adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
> >>> -		} else {
> >>> -			adev->pm.dpm.uvd_active = false;
> >>> -		}
> >>> -
> >>> -		amdgpu_legacy_dpm_compute_clocks(handle);
> >>> -		break;
> >>> -	case AMD_IP_BLOCK_TYPE_VCE:
> >>> -		if (!gate) {
> >>> -			adev->pm.dpm.vce_active = true;
> >>> -			/* XXX select vce level based on ring/task */
> >>> -			adev->pm.dpm.vce_level = AMD_VCE_LEVEL_AC_ALL;
> >>> -		} else {
> >>> -			adev->pm.dpm.vce_active = false;
> >>> -		}
> >>> -
> >>> -		amdgpu_legacy_dpm_compute_clocks(handle);
> >>> -		break;
> >>> -	default:
> >>> -		break;
> >>> -	}
> >>> -	return 0;
> >>> -}
> >>> -
> >>>    static int si_set_sw_state(struct amdgpu_device *adev)
> >>>    {
> >>>    	return (amdgpu_si_send_msg_to_smc(adev, PPSMC_MSG_SwitchToSwState) == PPSMC_Result_OK) ?
> >>> @@ -8125,7 +8091,6 @@ static const struct amd_pm_funcs si_dpm_funcs = {
> >>>    	.print_power_state = &si_dpm_print_power_state,
> >>>    	.debugfs_print_current_performance_level = &si_dpm_debugfs_print_current_performance_level,
> >>>    	.force_performance_level = &si_dpm_force_performance_level,
> >>> -	.set_powergating_by_smu = &si_set_powergating_by_smu,
> >>>    	.vblank_too_short = &si_dpm_vblank_too_short,
> >>>    	.set_fan_control_mode = &si_dpm_set_fan_control_mode,
> >>>    	.get_fan_control_mode = &si_dpm_get_fan_control_mode,
> >>> diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
> >>> index dbed72c1e0c6..1eb4e613b27a 100644
> >>> --- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
> >>> +++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
> >>> @@ -1503,16 +1503,6 @@ static void pp_pm_compute_clocks(void *handle)
> >>>    {
> >>>    	struct pp_hwmgr *hwmgr = handle;
> >>>    	struct amdgpu_device *adev = hwmgr->adev;
> >>> -	int i = 0;
> >>> -
> >>> -	if (adev->mode_info.num_crtc)
> >>> -		amdgpu_display_bandwidth_update(adev);
> >>> -
> >>> -	for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
> >>> -		struct amdgpu_ring *ring = adev->rings[i];
> >>> -		if (ring && ring->sched.ready)
> >>> -			amdgpu_fence_wait_empty(ring);
> >>> -	}
> >>>
> >>>    	if (!amdgpu_device_has_dc_support(adev)) {
> >>>    		amdgpu_dpm_get_active_displays(adev);
> >>>
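[Quan, Evan] To summarize the resulting flow on SI (condensed from the
diff above, non-SI path and field updates elided into comments):

    void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
    {
        if (adev->family == AMDGPU_FAMILY_SI) {
            mutex_lock(&adev->pm.mutex);
            /* flip adev->pm.dpm.uvd_active / adev->pm.dpm.state */
            mutex_unlock(&adev->pm.mutex);

            /* display bandwidth update and fence waits now run inside
             * amdgpu_dpm_compute_clocks(), before pm.mutex is taken
             * again around pm_compute_clocks() */
            amdgpu_dpm_compute_clocks(adev);
            return;
        }
        /* non-SI ASICs keep going through amdgpu_dpm_set_powergating_by_smu() */
    }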
