From: Alex Deucher <alexdeucher@gmail.com>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: Alex Deucher <alexander.deucher@amd.com>,
amd-gfx list <amd-gfx@lists.freedesktop.org>,
Michele Ballabio <ballabio.m@gmail.com>
Subject: Re: [PATCH] drm/amdgpu: don't runtime suspend if there are displays attached (v2)
Date: Tue, 19 Apr 2022 10:44:43 -0400 [thread overview]
Message-ID: <CADnq5_MnfM7h5KUnedgrYiYwu5O29UeJHGnGKuaQc9dHQv7MFQ@mail.gmail.com> (raw)
In-Reply-To: <6729c3d4-c3e9-d3d8-d23a-3892384794f6@molgen.mpg.de>
On Tue, Apr 19, 2022 at 10:04 AM Paul Menzel <pmenzel@molgen.mpg.de> wrote:
>
> Dear Alex,
>
>
> Thank you for the patch.
>
> Am 13.04.22 um 22:15 schrieb Alex Deucher:
> > We normally runtime suspend when there are displays attached if they
> > are in the DPMS off state. However, if something wakes the GPU,
> > we send a hotplug event on resume (in case any displays were connected
> > while the GPU was in suspend), which can cause userspace to light
> > up the displays again soon after they were turned off.
> >
> > Prior to
> > commit 087451f372bf76 ("drm/amdgpu: use generic fb helpers instead of setting up AMD own's."),
> > the driver took a runtime pm reference when the fbdev emulation was
> > enabled because we didn't implement proper shadowing support for
> > vram access when the device was off, so the device never runtime
> > suspended when there was a console bound. Once that commit landed,
> > we now utilize the core fb helper implementation, which properly
> > handles the emulation, so runtime pm now suspends in cases where it did
> > not before. Ultimately, we need to sort out why runtime suspend is not
> > working in this case for some users, but this should restore behavior
> > similar to before.
> >
> > v2: move check into runtime_suspend
> >
> > Fixes: 087451f372bf76 ("drm/amdgpu: use generic fb helpers instead of setting up AMD own's.")
> > Tested-by: Michele Ballabio <ballabio.m@gmail.com>
>
> On what system and device?
It was a Polaris dGPU, but it has been seen on other GPUs as well;
it's not device specific. Unfortunately, the issue is hard to
reproduce, at least in our testing.
>
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 107 ++++++++++++++++--------
> > 1 file changed, 72 insertions(+), 35 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > index 4efaa183abcd..97a1aa02d76e 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > @@ -2395,6 +2395,71 @@ static int amdgpu_pmops_restore(struct device *dev)
> > return amdgpu_device_resume(drm_dev, true);
> > }
> >
> > +static int amdgpu_runtime_idle_check_display(struct device *dev)
> > +{
> > + struct pci_dev *pdev = to_pci_dev(dev);
> > + struct drm_device *drm_dev = pci_get_drvdata(pdev);
> > + struct amdgpu_device *adev = drm_to_adev(drm_dev);
> > +
> > + if (adev->mode_info.num_crtc) {
> > + struct drm_connector *list_connector;
> > + struct drm_connector_list_iter iter;
> > + int ret = 0;
> > +
> > + /* XXX: Return busy if any displays are connected to avoid
> > + * possible display wake ups after runtime resume due to
>
> Nit: wakeups
Ack.
>
> > + * hotplug events in case any displays were connected while
> > + * the GPU was in suspend. Remove this once that is fixed.
> > + */
>
> Do you have an (internal) issue to track this?
Yes, we are tracking it.
Alex
>
> > + mutex_lock(&drm_dev->mode_config.mutex);
> > + drm_connector_list_iter_begin(drm_dev, &iter);
> > + drm_for_each_connector_iter(list_connector, &iter) {
> > + if (list_connector->status == connector_status_connected) {
> > + ret = -EBUSY;
> > + break;
> > + }
> > + }
> > + drm_connector_list_iter_end(&iter);
> > + mutex_unlock(&drm_dev->mode_config.mutex);
> > +
> > + if (ret)
> > + return ret;
> > +
> > + if (amdgpu_device_has_dc_support(adev)) {
> > + struct drm_crtc *crtc;
> > +
> > + drm_for_each_crtc(crtc, drm_dev) {
> > + drm_modeset_lock(&crtc->mutex, NULL);
> > + if (crtc->state->active)
> > + ret = -EBUSY;
> > + drm_modeset_unlock(&crtc->mutex);
> > + if (ret < 0)
> > + break;
> > + }
> > + } else {
> > + mutex_lock(&drm_dev->mode_config.mutex);
> > + drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
> > +
> > + drm_connector_list_iter_begin(drm_dev, &iter);
> > + drm_for_each_connector_iter(list_connector, &iter) {
> > + if (list_connector->dpms == DRM_MODE_DPMS_ON) {
> > + ret = -EBUSY;
> > + break;
> > + }
> > + }
> > +
> > + drm_connector_list_iter_end(&iter);
> > +
> > + drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
> > + mutex_unlock(&drm_dev->mode_config.mutex);
> > + }
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > static int amdgpu_pmops_runtime_suspend(struct device *dev)
> > {
> > struct pci_dev *pdev = to_pci_dev(dev);
> > @@ -2407,6 +2472,10 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
> > return -EBUSY;
> > }
> >
> > + ret = amdgpu_runtime_idle_check_display(dev);
> > + if (ret)
> > + return ret;
> > +
> > /* wait for all rings to drain before suspending */
> > for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
> > struct amdgpu_ring *ring = adev->rings[i];
> > @@ -2516,41 +2585,9 @@ static int amdgpu_pmops_runtime_idle(struct device *dev)
> > return -EBUSY;
> > }
> >
> > - if (amdgpu_device_has_dc_support(adev)) {
> > - struct drm_crtc *crtc;
> > -
> > - drm_for_each_crtc(crtc, drm_dev) {
> > - drm_modeset_lock(&crtc->mutex, NULL);
> > - if (crtc->state->active)
> > - ret = -EBUSY;
> > - drm_modeset_unlock(&crtc->mutex);
> > - if (ret < 0)
> > - break;
> > - }
> > -
> > - } else {
> > - struct drm_connector *list_connector;
> > - struct drm_connector_list_iter iter;
> > -
> > - mutex_lock(&drm_dev->mode_config.mutex);
> > - drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
> > -
> > - drm_connector_list_iter_begin(drm_dev, &iter);
> > - drm_for_each_connector_iter(list_connector, &iter) {
> > - if (list_connector->dpms == DRM_MODE_DPMS_ON) {
> > - ret = -EBUSY;
> > - break;
> > - }
> > - }
> > -
> > - drm_connector_list_iter_end(&iter);
> > -
> > - drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
> > - mutex_unlock(&drm_dev->mode_config.mutex);
> > - }
> > -
> > - if (ret == -EBUSY)
> > - DRM_DEBUG_DRIVER("failing to power off - crtc active\n");
> > + ret = amdgpu_runtime_idle_check_display(dev);
> > + if (ret)
> > + return ret;
> >
> > pm_runtime_mark_last_busy(dev);
> > pm_runtime_autosuspend(dev);
>
> The overall change looks good.
>
>
> Kind regards,
>
> Paul