From: Lyude Paul <>
Cc: Lukas Wunner <>,
	Peter Ujfalusi <>, Ben Skeggs <>,
	David Airlie <>,
Subject: [PATCH v8 1/5] drm/nouveau: Fix bogus drm_kms_helper_poll_enable() placement
Date: Wed, 15 Aug 2018 15:00:11 -0400	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

Turns out this part is my fault for not noticing when reviewing
9a2eba337cace ("drm/nouveau: Fix drm poll_helper handling"). Currently
we call drm_kms_helper_poll_enable() from nouveau_display_hpd_work().
This makes little sense, however, because it means we call
drm_kms_helper_poll_enable() every time the hotplug detection work is
scheduled. It also goes against the advice in
drm_kms_helper_poll_enable()'s documentation:

 Note that calls to enable and disable polling must be strictly ordered,
 which is automatically the case when they're only called from
 suspend/resume callbacks.

Of course, hotplugs can't really be ordered. They could even happen
immediately after we called drm_kms_helper_poll_disable() in
nouveau_display_fini(), which can lead to all sorts of issues.

Additionally, enabling polling /after/ we call
drm_helper_hpd_irq_event() could also mean that we'd miss a hotplug
event anyway, since drm_helper_hpd_irq_event() wouldn't bother trying to
probe connectors so long as polling is disabled.
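The two problems can be sketched roughly as follows (pseudocode with
simplified control flow, not the actual driver code):

```
/* Suspend path disables polling: */
nouveau_display_fini()
    drm_kms_helper_poll_disable(dev)

/* ...but the hotplug work may run at any point afterwards: */
nouveau_display_hpd_work()
    drm_helper_hpd_irq_event(dev)     /* skips probing poll-only
                                         connectors while polling
                                         is disabled */
    drm_kms_helper_poll_enable(dev)   /* re-enables polling behind
                                         fini()'s back */
```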

So, simply move this back into nouveau_display_init() again. The race
condition that both of these patches attempted to work around has
already been fixed properly in

  d61a5c106351 ("drm/nouveau: Fix deadlock on runtime suspend")

Fixes: 9a2eba337cace ("drm/nouveau: Fix drm poll_helper handling")
Signed-off-by: Lyude Paul <>
Acked-by: Karol Herbst <>
Acked-by: Daniel Vetter <>
Cc: Lukas Wunner <>
Cc: Peter Ujfalusi <>
---
 drivers/gpu/drm/nouveau/nouveau_display.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index ec7861457b84..1d36ab5d4796 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -355,8 +355,6 @@ nouveau_display_hpd_work(struct work_struct *work)
-	/* enable polling for external displays */
-	drm_kms_helper_poll_enable(drm->dev);
@@ -411,6 +409,11 @@ nouveau_display_init(struct drm_device *dev)
 	if (ret)
 		return ret;
+	/* enable connector detection and polling for connectors without HPD
+	 * support
+	 */
+	drm_kms_helper_poll_enable(dev);
 	/* enable hotplug interrupts */
 	drm_connector_list_iter_begin(dev, &conn_iter);
 	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {

Thread overview: 6+ messages
2018-08-15 19:00 [PATCH v8 0/5] Fix connector probing deadlocks from RPM bugs Lyude Paul
2018-08-15 19:00 ` Lyude Paul [this message]
2018-08-15 19:00 ` [PATCH v8 2/5] drm/nouveau: Remove duplicate poll_enable() in pmops_runtime_suspend() Lyude Paul
2018-08-15 19:00 ` [PATCH v8 3/5] drm/nouveau: Fix deadlock with fb_helper with async RPM requests Lyude Paul
2018-08-15 19:00 ` [PATCH v8 4/5] drm/nouveau: Use pm_runtime_get_noresume() in connector_detect() Lyude Paul
2018-08-15 19:00 ` [PATCH v8 5/5] drm/nouveau: Fix deadlocks in nouveau_connector_detect() Lyude Paul
