* [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 @ 2022-01-05 17:06 Mario Limonciello 2022-01-05 21:26 ` Harry Wentland 2022-01-06 11:38 ` Thorsten Leemhuis 0 siblings, 2 replies; 5+ messages in thread From: Mario Limonciello @ 2022-01-05 17:06 UTC (permalink / raw) To: amd-gfx Cc: Qingqing Zhuo, Scott Bruce, Mario Limonciello, Chris Hixon, spasswolf The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") causes a regression in s0ix where the system will fail to resume properly. This may be because an HPD was active the last time clocks were updated but clocks didn't get updated again during s0ix. So add an extra call to update clocks as part of the suspend routine: dm_suspend->dc_set_power_state->clk_mgr_optimize_pwr_state In case HPD is set during this time, also check if the call happened during suspend to allow overriding the WA. Cc: Qingqing Zhuo <qingqing.zhuo@amd.com> Reported-by: Scott Bruce <smbruce@gmail.com> Reported-by: Chris Hixon <linux-kernel-bugs@hixontech.com> Reported-by: spasswolf@web.de BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215436 BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1821 BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1852 Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to dcn20") Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> --- changes from v1->v2: * Add fallthrough statement * Extend case to check if call was explicitly in s0ix since #1852 showed hpd_state can be set at this time too * Adjust commit message and title * Add extra commit and bug fixed to metadata drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 5 ++++- drivers/gpu/drm/amd/display/dc/core/dc.c | 3 +++ 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c 
b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c index fbda42313bfe..fa5efe10b779 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c @@ -23,6 +23,8 @@ * */ +#include "amdgpu.h" + #include "dccg.h" #include "clk_mgr_internal.h" @@ -126,6 +128,7 @@ static void rn_update_clocks(struct clk_mgr *clk_mgr_base, bool safe_to_lower) { struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); + struct amdgpu_device *adev = clk_mgr_base->ctx->driver_context; struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; struct dc *dc = clk_mgr_base->ctx->dc; int display_count; @@ -157,7 +160,7 @@ static void rn_update_clocks(struct clk_mgr *clk_mgr_base, } /* if we can go lower, go lower */ - if (display_count == 0 && !hpd_state) { + if (display_count == 0 && (adev->in_s0ix || !hpd_state)) { rn_vbios_smu_set_dcn_low_power_state(clk_mgr, DCN_PWR_STATE_LOW_POWER); /* update power state */ clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER; diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index 91c4874473d6..8e0c820f6892 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -3299,6 +3299,9 @@ void dc_set_power_state( } break; + case DC_ACPI_CM_POWER_STATE_D3: + clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); + fallthrough; default: ASSERT(dc->current_state->stream_count == 0); /* Zero out the current context so that on resume we start with -- 2.25.1 ^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 2022-01-05 17:06 [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 Mario Limonciello @ 2022-01-05 21:26 ` Harry Wentland 2022-01-05 21:39 ` Limonciello, Mario 2022-01-06 11:38 ` Thorsten Leemhuis 1 sibling, 1 reply; 5+ messages in thread From: Harry Wentland @ 2022-01-05 21:26 UTC (permalink / raw) To: Mario Limonciello, amd-gfx Cc: Qingqing Zhuo, Scott Bruce, spasswolf, Chris Hixon On 2022-01-05 12:06, Mario Limonciello wrote: > The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for > hard hang on HPD") causes a regression in s0ix where the system will > fail to resume properly. This may be because an HPD was active the last > time clocks were updated but clocks didn't get updated again during s0ix. > > So add an extra call to update clocks as part of the suspend routine: > dm_suspend->dc_set_power_state->clk_mgr_optimize_pwr_state > > In case HPD is set during this time, also check if the call happened during > suspend to allow overriding the WA. 
> > Cc: Qingqing Zhuo <qingqing.zhuo@amd.com> > Reported-by: Scott Bruce <smbruce@gmail.com> > Reported-by: Chris Hixon <linux-kernel-bugs@hixontech.com> > Reported-by: spasswolf@web.de > BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215436 > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1821 > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1852 > Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") > Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to dcn20") > Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> > --- > changes from v1->v2: > * Add fallthrough statement > * Extend case to check if call was explicitly in s0ix since #1852 showed hpd_state > can be set at this time too > * Adjust commit message and title > * Add extra commit and bug fixed to metadata > drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 5 ++++- > drivers/gpu/drm/amd/display/dc/core/dc.c | 3 +++ > 2 files changed, 7 insertions(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > index fbda42313bfe..fa5efe10b779 100644 > --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > @@ -23,6 +23,8 @@ > * > */ > > +#include "amdgpu.h" > + > #include "dccg.h" > #include "clk_mgr_internal.h" > > @@ -126,6 +128,7 @@ static void rn_update_clocks(struct clk_mgr *clk_mgr_base, > bool safe_to_lower) > { > struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); > + struct amdgpu_device *adev = clk_mgr_base->ctx->driver_context; DC code is shared on other OSes. driver_context won't be an instance of amdgpu_device in those cases. If we need adev->in_s0ix we need to find a way to plumb it through DC structs and interfaces. 
Harry > struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; > struct dc *dc = clk_mgr_base->ctx->dc; > int display_count; > @@ -157,7 +160,7 @@ static void rn_update_clocks(struct clk_mgr *clk_mgr_base, > } > > /* if we can go lower, go lower */ > - if (display_count == 0 && !hpd_state) { > + if (display_count == 0 && (adev->in_s0ix || !hpd_state)) { > rn_vbios_smu_set_dcn_low_power_state(clk_mgr, DCN_PWR_STATE_LOW_POWER); > /* update power state */ > clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER; > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c > index 91c4874473d6..8e0c820f6892 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c > @@ -3299,6 +3299,9 @@ void dc_set_power_state( > } > > break; > + case DC_ACPI_CM_POWER_STATE_D3: > + clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); > + fallthrough; > default: > ASSERT(dc->current_state->stream_count == 0); > /* Zero out the current context so that on resume we start with ^ permalink raw reply [flat|nested] 5+ messages in thread
* RE: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 2022-01-05 21:26 ` Harry Wentland @ 2022-01-05 21:39 ` Limonciello, Mario 0 siblings, 0 replies; 5+ messages in thread From: Limonciello, Mario @ 2022-01-05 21:39 UTC (permalink / raw) To: Wentland, Harry, amd-gfx, Zhuo, Qingqing (Lillian) Cc: Scott Bruce, spasswolf, Chris Hixon [Public] > -----Original Message----- > From: Wentland, Harry <Harry.Wentland@amd.com> > Sent: Wednesday, January 5, 2022 15:26 > To: Limonciello, Mario <Mario.Limonciello@amd.com>; amd- > gfx@lists.freedesktop.org > Cc: Zhuo, Qingqing (Lillian) <Qingqing.Zhuo@amd.com>; Scott Bruce > <smbruce@gmail.com>; Chris Hixon <linux-kernel-bugs@hixontech.com>; > spasswolf@web.de > Subject: Re: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set > to D3 in s0i3 > > On 2022-01-05 12:06, Mario Limonciello wrote: > > The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for > > hard hang on HPD") causes a regression in s0ix where the system will > > fail to resume properly. This may be because an HPD was active the last > > time clocks were updated but clocks didn't get updated again during s0ix. > > > > So add an extra call to update clocks as part of the suspend routine: > > dm_suspend->dc_set_power_state->clk_mgr_optimize_pwr_state > > > > In case HPD is set during this time, also check if the call happened during > > suspend to allow overriding the WA. 
> > > > Cc: Qingqing Zhuo <qingqing.zhuo@amd.com> > > Reported-by: Scott Bruce <smbruce@gmail.com> > > Reported-by: Chris Hixon <linux-kernel-bugs@hixontech.com> > > Reported-by: spasswolf@web.de > > BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215436 > > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1821 > > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1852 > > Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") > > Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to > dcn20") > > Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> > > --- > > changes from v1->v2: > > * Add fallthrough statement > > * Extend case to check if call was explicitly in s0ix since #1852 showed > hpd_state > > can be set at this time too > > * Adjust commit message and title > > * Add extra commit and bug fixed to metadata > > drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 5 ++++- > > drivers/gpu/drm/amd/display/dc/core/dc.c | 3 +++ > > 2 files changed, 7 insertions(+), 1 deletion(-) > > > > diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > index fbda42313bfe..fa5efe10b779 100644 > > --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > @@ -23,6 +23,8 @@ > > * > > */ > > > > +#include "amdgpu.h" > > + > > #include "dccg.h" > > #include "clk_mgr_internal.h" > > > > @@ -126,6 +128,7 @@ static void rn_update_clocks(struct clk_mgr > *clk_mgr_base, > > bool safe_to_lower) > > { > > struct clk_mgr_internal *clk_mgr = > TO_CLK_MGR_INTERNAL(clk_mgr_base); > > + struct amdgpu_device *adev = clk_mgr_base->ctx->driver_context; > > DC code is shared on other OSes. driver_context won't be an instance > of amdgpu_device in those cases. > > If we need adev->in_s0ix we need to find a way to plumb it through > DC structs and interfaces. > OK. 
I'll see what that takes to do. I was hoping the original version would have sufficed but Chris (on CC) had reported that hpd_state was still active when this was called again during the suspend context. <snip> Jan 05 15:39:08.562580 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 05 15:43:54.575923 kernel: wlo1: deauthenticating from ZZ:ZZ:ZZ:ZZ:ZZ:ZZ by local choice (Reason: 3=DEAUTH_LEAVING) Jan 05 15:43:54.699223 kernel: amdgpu 0000:04:00.0: amdgpu: [rn_update_clocks] display count: 0, in_s0ix: 0, hpd_state: 0 Jan 05 15:43:55.922674 kernel: amdgpu 0000:04:00.0: amdgpu: [rn_update_clocks] display count: 1, in_s0ix: 0, hpd_state: 1 Jan 05 15:43:55.979221 kernel: rfkill: input handler enabled Jan 05 15:43:57.712689 kernel: PM: suspend entry (s2idle) Jan 05 15:44:07.302069 kernel: Filesystems sync: 0.008 seconds Jan 05 15:44:07.302256 kernel: Freezing user space processes ... (elapsed 0.001 seconds) done. Jan 05 15:44:07.302306 kernel: OOM killer disabled. Jan 05 15:44:07.302344 kernel: Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done. Jan 05 15:44:07.302378 kernel: printk: Suspending console(s) (use no_console_suspend to debug) Jan 05 15:44:07.302416 kernel: amdgpu 0000:04:00.0: amdgpu: [rn_update_clocks] display count: 0, in_s0ix: 1, hpd_state: 1 </snip> So I suppose the other question to ask though - why is that hpd interrupt still active? Shouldn't it have been suspended by then? @Zhuo, Qingqing (Lillian) Maybe some other code is needed to suspend it on the way down? It was for spasswolf@web.de (which is why I had the simpler version of the patch for v1). 
> Harry > > > struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; > > struct dc *dc = clk_mgr_base->ctx->dc; > > int display_count; > > @@ -157,7 +160,7 @@ static void rn_update_clocks(struct clk_mgr > *clk_mgr_base, > > } > > > > /* if we can go lower, go lower */ > > - if (display_count == 0 && !hpd_state) { > > + if (display_count == 0 && (adev->in_s0ix || !hpd_state)) > { > > > rn_vbios_smu_set_dcn_low_power_state(clk_mgr, > DCN_PWR_STATE_LOW_POWER); > > /* update power state */ > > clk_mgr_base->clks.pwr_state = > DCN_PWR_STATE_LOW_POWER; > > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c > b/drivers/gpu/drm/amd/display/dc/core/dc.c > > index 91c4874473d6..8e0c820f6892 100644 > > --- a/drivers/gpu/drm/amd/display/dc/core/dc.c > > +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c > > @@ -3299,6 +3299,9 @@ void dc_set_power_state( > > } > > > > break; > > + case DC_ACPI_CM_POWER_STATE_D3: > > + clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); > > + fallthrough; > > default: > > ASSERT(dc->current_state->stream_count == 0); > > /* Zero out the current context so that on resume we start with ^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 2022-01-05 17:06 [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 Mario Limonciello 2022-01-05 21:26 ` Harry Wentland @ 2022-01-06 11:38 ` Thorsten Leemhuis 2022-01-06 13:46 ` Limonciello, Mario 1 sibling, 1 reply; 5+ messages in thread From: Thorsten Leemhuis @ 2022-01-06 11:38 UTC (permalink / raw) To: Mario Limonciello, amd-gfx Cc: Qingqing Zhuo, Scott Bruce, spasswolf, Chris Hixon Hi, this is your Linux kernel regression tracker speaking. On 05.01.22 18:06, Mario Limonciello wrote: > The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for > hard hang on HPD") causes a regression in s0ix where the system will > fail to resume properly. This may be because an HPD was active the last > time clocks were updated but clocks didn't get updated again during s0ix. > > So add an extra call to update clocks as part of the suspend routine: > dm_suspend->dc_set_power_state->clk_mgr_optimize_pwr_state > > In case HPD is set during this time, also check if the call happened during > suspend to allow overriding the WA. Thx for this. Out of interest: is that something that should still go into v5.16? > Cc: Qingqing Zhuo <qingqing.zhuo@amd.com> > Reported-by: Scott Bruce <smbruce@gmail.com> > Reported-by: Chris Hixon <linux-kernel-bugs@hixontech.com> > Reported-by: spasswolf@web.de > BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215436 > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1821 > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1852 What is "BugLink"? It's not mentioned in the kernel's documentation, which tells people to use "Link:" for linking to bug reports: https://www.kernel.org/doc/html/latest/process/submitting-patches.html That is not new, but it recently was made more explicit. 
Hence, please consider changing them to "Link:", there are tools out there that rely on them (I'm running such a tool, but there might be others out there we are not aware of). FWIW, in principle I agree that we need a separate tag for this. But then we should get this discussed and blessed through the usual channels. That's why I recently proposed "Reported:", without much success so far: https://lore.kernel.org/lkml/c6724c7fb534a46e095e6aef13d996ed9589e578.1639042966.git.linux@leemhuis.info/ Ciao, Thorsten > Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") > Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to dcn20") > Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> > --- > changes from v1->v2: > * Add fallthrough statement > * Extend case to check if call was explicitly in s0ix since #1852 showed hpd_state > can be set at this time too > * Adjust commit message and title > * Add extra commit and bug fixed to metadata > drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 5 ++++- > drivers/gpu/drm/amd/display/dc/core/dc.c | 3 +++ > 2 files changed, 7 insertions(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > index fbda42313bfe..fa5efe10b779 100644 > --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > @@ -23,6 +23,8 @@ > * > */ > > +#include "amdgpu.h" > + > #include "dccg.h" > #include "clk_mgr_internal.h" > > @@ -126,6 +128,7 @@ static void rn_update_clocks(struct clk_mgr *clk_mgr_base, > bool safe_to_lower) > { > struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base); > + struct amdgpu_device *adev = clk_mgr_base->ctx->driver_context; > struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; > struct dc *dc = clk_mgr_base->ctx->dc; > int display_count; > @@ -157,7 +160,7 @@ static void 
rn_update_clocks(struct clk_mgr *clk_mgr_base, > } > > /* if we can go lower, go lower */ > - if (display_count == 0 && !hpd_state) { > + if (display_count == 0 && (adev->in_s0ix || !hpd_state)) { > rn_vbios_smu_set_dcn_low_power_state(clk_mgr, DCN_PWR_STATE_LOW_POWER); > /* update power state */ > clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER; > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c > index 91c4874473d6..8e0c820f6892 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c > @@ -3299,6 +3299,9 @@ void dc_set_power_state( > } > > break; > + case DC_ACPI_CM_POWER_STATE_D3: > + clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); > + fallthrough; > default: > ASSERT(dc->current_state->stream_count == 0); > /* Zero out the current context so that on resume we start with ^ permalink raw reply [flat|nested] 5+ messages in thread
* RE: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set to D3 in s0i3 2022-01-06 11:38 ` Thorsten Leemhuis @ 2022-01-06 13:46 ` Limonciello, Mario 0 siblings, 0 replies; 5+ messages in thread From: Limonciello, Mario @ 2022-01-06 13:46 UTC (permalink / raw) To: Thorsten Leemhuis, amd-gfx Cc: Zhuo, Qingqing (Lillian), Scott Bruce, spasswolf, Chris Hixon [Public] > -----Original Message----- > From: Thorsten Leemhuis <regressions@leemhuis.info> > Sent: Thursday, January 6, 2022 05:39 > To: Limonciello, Mario <Mario.Limonciello@amd.com>; amd- > gfx@lists.freedesktop.org > Cc: Zhuo, Qingqing (Lillian) <Qingqing.Zhuo@amd.com>; Scott Bruce > <smbruce@gmail.com>; Chris Hixon <linux-kernel-bugs@hixontech.com>; > spasswolf@web.de > Subject: Re: [PATCH v2] drm/amd/display: explicitly update clocks when DC is set > to D3 in s0i3 > > Hi, this is your Linux kernel regression tracker speaking. > > On 05.01.22 18:06, Mario Limonciello wrote: > > The WA from commit 5965280abd30 ("drm/amd/display: Apply w/a for > > hard hang on HPD") causes a regression in s0ix where the system will > > fail to resume properly. This may be because an HPD was active the last > > time clocks were updated but clocks didn't get updated again during s0ix. > > > > So add an extra call to update clocks as part of the suspend routine: > > dm_suspend->dc_set_power_state->clk_mgr_optimize_pwr_state > > > > In case HPD is set during this time, also check if the call happened during > > suspend to allow overriding the WA. > > Thx for this. Out of interest: is that something that should still go > into v5.16? Not in v2. There were review comments that led to creating a V3. Once V3 is reviewed, if there is time remaining, yes. If not, then it will be a great candidate for 5.16.1. 
> > > Cc: Qingqing Zhuo <qingqing.zhuo@amd.com> > > Reported-by: Scott Bruce <smbruce@gmail.com> > > Reported-by: Chris Hixon <linux-kernel-bugs@hixontech.com> > > Reported-by: spasswolf@web.de > > BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215436 > > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1821 > > BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1852 > > What is "BugLink"? 
It's not mentioned in the kernel's documentation, which > tells people to use "Link:" for linking to bug reports: > https://www.kernel.org/doc/html/latest/process/submitting-patches.html > > That is not new, but it recently was made more explicit. > > Hence, please consider changing them to "Link:", there are tools out > there that rely on them (I'm running such a tool, but there might be > others out there we are not aware of). Thanks for enlightening me. If I need to spin a v4 I'll fix it up on next submission, otherwise I'll fix it up in v3 manually before it goes into amd-staging-drm-next. > > FWIW, in principle I agree that we need a separate tag for this. But then > we should get this discussed and blessed through the usual channels. > That's why I recently proposed "Reported:", without much success so far: > https://lore.kernel.org/lkml/c6724c7fb534a46e095e6aef13d996ed9589e578.1639042966.git.linux@leemhuis.info/ At least some distributions' kernel teams use BugLink (presumably from their own tools). 
> > Ciao, Thorsten > > > Fixes: 5965280abd30 ("drm/amd/display: Apply w/a for hard hang on HPD") > > Fixes: 1bd3bc745e7f ("drm/amd/display: Extend w/a for hard hang on HPD to > dcn20") > > Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> > > --- > > changes from v1->v2: > > * Add fallthrough statement > > * Extend case to check if call was explicitly in s0ix since #1852 showed > hpd_state > > can be set at this time too > > * Adjust commit message and title > > * Add extra commit and bug fixed to metadata > > drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 5 ++++- > > drivers/gpu/drm/amd/display/dc/core/dc.c | 3 +++ > > 2 files changed, 7 insertions(+), 1 deletion(-) > > > > diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > index fbda42313bfe..fa5efe10b779 100644 > > --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c > > @@ -23,6 +23,8 @@ > > * > > */ > > > > +#include "amdgpu.h" > > + > > #include "dccg.h" > > #include "clk_mgr_internal.h" > > > > @@ -126,6 +128,7 @@ static void rn_update_clocks(struct clk_mgr > *clk_mgr_base, > > bool safe_to_lower) > > { > > struct clk_mgr_internal *clk_mgr = > TO_CLK_MGR_INTERNAL(clk_mgr_base); > > + struct amdgpu_device *adev = clk_mgr_base->ctx->driver_context; > > struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk; > > struct dc *dc = clk_mgr_base->ctx->dc; > > int display_count; > > @@ -157,7 +160,7 @@ static void rn_update_clocks(struct clk_mgr > *clk_mgr_base, > > } > > > > /* if we can go lower, go lower */ > > - if (display_count == 0 && !hpd_state) { > > + if (display_count == 0 && (adev->in_s0ix || !hpd_state)) > { > > > rn_vbios_smu_set_dcn_low_power_state(clk_mgr, > DCN_PWR_STATE_LOW_POWER); > > /* update power state */ > > clk_mgr_base->clks.pwr_state = > DCN_PWR_STATE_LOW_POWER; > > diff --git 
a/drivers/gpu/drm/amd/display/dc/core/dc.c > b/drivers/gpu/drm/amd/display/dc/core/dc.c > > index 91c4874473d6..8e0c820f6892 100644 > > --- a/drivers/gpu/drm/amd/display/dc/core/dc.c > > +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c > > @@ -3299,6 +3299,9 @@ void dc_set_power_state( > > } > > > > break; > > + case DC_ACPI_CM_POWER_STATE_D3: > > + clk_mgr_optimize_pwr_state(dc, dc->clk_mgr); > > + fallthrough; > > default: > > ASSERT(dc->current_state->stream_count == 0); > > /* Zero out the current context so that on resume we start with ^ permalink raw reply [flat|nested] 5+ messages in thread