From: "Christian König" <ckoenig.leichtzumerken@gmail.com>
To: Alex Deucher <alexdeucher@gmail.com>,
	Alex Deucher <alexander.deucher@amd.com>
Cc: amd-gfx list <amd-gfx@lists.freedesktop.org>,
	Michele Ballabio <ballabio.m@gmail.com>
Subject: Re: [PATCH] drm/amdgpu: don't runtime suspend if there are displays attached (v2)
Date: Wed, 20 Apr 2022 16:05:25 +0200	[thread overview]
Message-ID: <89aa325c-415a-5fce-bb23-b21c9fecfe8e@gmail.com> (raw)
In-Reply-To: <CADnq5_PO7v8nM2FueQNxmJfhuvHCLfhgsYaVeSHb0qmfppxbKw@mail.gmail.com>

I could ack it, but I'm absolutely not an expert on that stuff.

Our DC team maybe? Or anybody working more on the PM code?

Christian.

On 20.04.22 at 15:57, Alex Deucher wrote:
> Ping?  Anyone care to review this?
>
> Alex
>
> On Tue, Apr 19, 2022 at 9:47 AM Alex Deucher <alexdeucher@gmail.com> wrote:
>> Ping?
>>
>> On Wed, Apr 13, 2022 at 4:15 PM Alex Deucher <alexander.deucher@amd.com> wrote:
>>> We normally runtime suspend when there are displays attached if they
>>> are in the DPMS off state.  However, if something wakes the GPU, we
>>> send a hotplug event on resume (in case any displays were connected
>>> while the GPU was suspended), which can cause userspace to light the
>>> displays back up soon after they were turned off.
>>>
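>>> The display wake-ups described above come from the resume path sending
>>> a hotplug event.  A rough sketch of that pattern (my_runtime_resume and
>>> the direct drm_helper_hpd_irq_event() call are illustrative, not the
>>> exact amdgpu resume code):
>>>
>>>   #include <linux/device.h>
>>>   #include <drm/drm_device.h>
>>>   #include <drm/drm_probe_helper.h>
>>>
>>>   static int my_runtime_resume(struct device *dev)
>>>   {
>>>           struct drm_device *drm_dev = dev_get_drvdata(dev);
>>>
>>>           /* ... bring the hardware back up ... */
>>>
>>>           /* re-check connectors in case something was plugged in while
>>>            * the GPU was asleep; userspace may light displays back up */
>>>           drm_helper_hpd_irq_event(drm_dev);
>>>           return 0;
>>>   }
>>>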
>>> Prior to
>>> commit 087451f372bf76 ("drm/amdgpu: use generic fb helpers instead of setting up AMD own's."),
>>> the driver took a runtime pm reference while the fbdev emulation was
>>> enabled because we didn't implement proper shadowing support for
>>> vram access when the device was off, so the device never runtime
>>> suspended while a console was bound.  Since that commit landed, we
>>> use the core fb helper implementation, which properly handles the
>>> emulation, so runtime pm now suspends in cases where it did not
>>> before.  Ultimately, we need to sort out why runtime suspend is not
>>> working in this case for some users, but this should restore behavior
>>> similar to what we had before.
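>>>
>>> Roughly, that old approach amounted to pinning the device with a
>>> runtime PM reference for the lifetime of the console (hypothetical
>>> my_fbdev_* helpers below, not the actual pre-087451f372bf76 code):
>>>
>>>   #include <linux/pm_runtime.h>
>>>
>>>   static int my_fbdev_init(struct device *dev)
>>>   {
>>>           /* bump the usage count so runtime suspend cannot trigger
>>>            * while the fbdev console is bound */
>>>           int r = pm_runtime_get_sync(dev);
>>>
>>>           if (r < 0) {
>>>                   pm_runtime_put_noidle(dev);
>>>                   return r;
>>>           }
>>>           return 0;
>>>   }
>>>
>>>   static void my_fbdev_fini(struct device *dev)
>>>   {
>>>           /* drop the reference so autosuspend can kick in again */
>>>           pm_runtime_mark_last_busy(dev);
>>>           pm_runtime_put_autosuspend(dev);
>>>   }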
>>>
>>> v2: move check into runtime_suspend
>>>
>>> Fixes: 087451f372bf76 ("drm/amdgpu: use generic fb helpers instead of setting up AMD own's.")
>>> Tested-by: Michele Ballabio <ballabio.m@gmail.com>
>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>> ---
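>>> For context, the two callbacks touched below are the runtime PM hooks
>>> wired up through the driver's dev_pm_ops; a simplified sketch of that
>>> wiring (the my_* names are placeholders, not the actual amdgpu table):
>>>
>>>   #include <linux/pci.h>
>>>   #include <linux/pm.h>
>>>
>>>   static const struct dev_pm_ops my_pm_ops = {
>>>           /* runtime_idle runs when the device looks unused; a nonzero
>>>            * return (e.g. -EBUSY) blocks the autosuspend */
>>>           SET_RUNTIME_PM_OPS(my_runtime_suspend,
>>>                              my_runtime_resume,
>>>                              my_runtime_idle)
>>>   };
>>>
>>>   static struct pci_driver my_pci_driver = {
>>>           /* ... */
>>>           .driver.pm = &my_pm_ops,
>>>   };
>>>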
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 107 ++++++++++++++++--------
>>>   1 file changed, 72 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> index 4efaa183abcd..97a1aa02d76e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>> @@ -2395,6 +2395,71 @@ static int amdgpu_pmops_restore(struct device *dev)
>>>          return amdgpu_device_resume(drm_dev, true);
>>>   }
>>>
>>> +static int amdgpu_runtime_idle_check_display(struct device *dev)
>>> +{
>>> +       struct pci_dev *pdev = to_pci_dev(dev);
>>> +       struct drm_device *drm_dev = pci_get_drvdata(pdev);
>>> +       struct amdgpu_device *adev = drm_to_adev(drm_dev);
>>> +
>>> +       if (adev->mode_info.num_crtc) {
>>> +               struct drm_connector *list_connector;
>>> +               struct drm_connector_list_iter iter;
>>> +               int ret = 0;
>>> +
>>> +               /* XXX: Return busy if any displays are connected to avoid
>>> +                * possible display wake ups after runtime resume due to
>>> +                * hotplug events in case any displays were connected while
>>> +                * the GPU was in suspend.  Remove this once that is fixed.
>>> +                */
>>> +               mutex_lock(&drm_dev->mode_config.mutex);
>>> +               drm_connector_list_iter_begin(drm_dev, &iter);
>>> +               drm_for_each_connector_iter(list_connector, &iter) {
>>> +                       if (list_connector->status == connector_status_connected) {
>>> +                               ret = -EBUSY;
>>> +                               break;
>>> +                       }
>>> +               }
>>> +               drm_connector_list_iter_end(&iter);
>>> +               mutex_unlock(&drm_dev->mode_config.mutex);
>>> +
>>> +               if (ret)
>>> +                       return ret;
>>> +
>>> +               if (amdgpu_device_has_dc_support(adev)) {
>>> +                       struct drm_crtc *crtc;
>>> +
>>> +                       drm_for_each_crtc(crtc, drm_dev) {
>>> +                               drm_modeset_lock(&crtc->mutex, NULL);
>>> +                               if (crtc->state->active)
>>> +                                       ret = -EBUSY;
>>> +                               drm_modeset_unlock(&crtc->mutex);
>>> +                               if (ret < 0)
>>> +                                       break;
>>> +                       }
>>> +               } else {
>>> +                       mutex_lock(&drm_dev->mode_config.mutex);
>>> +                       drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
>>> +
>>> +                       drm_connector_list_iter_begin(drm_dev, &iter);
>>> +                       drm_for_each_connector_iter(list_connector, &iter) {
>>> +                               if (list_connector->dpms ==  DRM_MODE_DPMS_ON) {
>>> +                                       ret = -EBUSY;
>>> +                                       break;
>>> +                               }
>>> +                       }
>>> +
>>> +                       drm_connector_list_iter_end(&iter);
>>> +
>>> +                       drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
>>> +                       mutex_unlock(&drm_dev->mode_config.mutex);
>>> +               }
>>> +               if (ret)
>>> +                       return ret;
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
>>>   static int amdgpu_pmops_runtime_suspend(struct device *dev)
>>>   {
>>>          struct pci_dev *pdev = to_pci_dev(dev);
>>> @@ -2407,6 +2472,10 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
>>>                  return -EBUSY;
>>>          }
>>>
>>> +       ret = amdgpu_runtime_idle_check_display(dev);
>>> +       if (ret)
>>> +               return ret;
>>> +
>>>          /* wait for all rings to drain before suspending */
>>>          for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
>>>                  struct amdgpu_ring *ring = adev->rings[i];
>>> @@ -2516,41 +2585,9 @@ static int amdgpu_pmops_runtime_idle(struct device *dev)
>>>                  return -EBUSY;
>>>          }
>>>
>>> -       if (amdgpu_device_has_dc_support(adev)) {
>>> -               struct drm_crtc *crtc;
>>> -
>>> -               drm_for_each_crtc(crtc, drm_dev) {
>>> -                       drm_modeset_lock(&crtc->mutex, NULL);
>>> -                       if (crtc->state->active)
>>> -                               ret = -EBUSY;
>>> -                       drm_modeset_unlock(&crtc->mutex);
>>> -                       if (ret < 0)
>>> -                               break;
>>> -               }
>>> -
>>> -       } else {
>>> -               struct drm_connector *list_connector;
>>> -               struct drm_connector_list_iter iter;
>>> -
>>> -               mutex_lock(&drm_dev->mode_config.mutex);
>>> -               drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
>>> -
>>> -               drm_connector_list_iter_begin(drm_dev, &iter);
>>> -               drm_for_each_connector_iter(list_connector, &iter) {
>>> -                       if (list_connector->dpms ==  DRM_MODE_DPMS_ON) {
>>> -                               ret = -EBUSY;
>>> -                               break;
>>> -                       }
>>> -               }
>>> -
>>> -               drm_connector_list_iter_end(&iter);
>>> -
>>> -               drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
>>> -               mutex_unlock(&drm_dev->mode_config.mutex);
>>> -       }
>>> -
>>> -       if (ret == -EBUSY)
>>> -               DRM_DEBUG_DRIVER("failing to power off - crtc active\n");
>>> +       ret = amdgpu_runtime_idle_check_display(dev);
>>> +       if (ret)
>>> +               return ret;
>>>
>>>          pm_runtime_mark_last_busy(dev);
>>>          pm_runtime_autosuspend(dev);
>>> --
>>> 2.35.1
>>>

