From: Daniel Vetter <daniel@ffwll.ch>
To: Ben Widawsky <ben@bwidawsk.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func
Date: Sun, 4 Sep 2011 21:26:48 +0200 [thread overview]
Message-ID: <20110904192648.GB2799@phenom.ffwll.local> (raw)
In-Reply-To: <20110904100817.16c6c4cc@bwidawsk.net>
On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 55518e3..3bc1479 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
> gen6_set_rps(dev_priv->dev, new_delay);
> dev_priv->cur_delay = new_delay;
>
> - /*
> - * rps_lock not held here because clearing is non-destructive. There is
> - * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> - * by holding struct_mutex for the duration of the write.
> - */
> - I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> + I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
> mutex_unlock(&dev_priv->dev->struct_mutex);
> }
For this to work we'd need to hold the rps_lock (to avoid racing with the
irq handler). But imo my approach is conceptually simpler: the work func
grabs all outstanding PM interrupts and then enables them again in one go
(protected by rps_lock). And because the dev_priv->wq workqueue is
single-threaded (there's no point in using multiple threads when every
work item grabs dev->struct_mutex anyway), we also can't make a mess by
running work items in the wrong order (or in parallel).
-Daniel
--
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48
Thread overview: 24+ messages
2011-09-04 3:24 [PATCH] drm/i915: Fix rps irq warning Ben Widawsky
2011-09-04 9:03 ` Chris Wilson
2011-09-04 15:49 ` Ben Widawsky
2011-09-04 15:34 ` [PATCH 0/3] slaughter rps races some more Daniel Vetter
2011-09-04 15:35 ` [PATCH 1/3] drm/i915: close PM interrupt masking races in the irq handler Daniel Vetter
2011-09-04 17:09 ` Ben Widawsky
2011-09-04 15:35 ` [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func Daniel Vetter
2011-09-04 17:08 ` Ben Widawsky
2011-09-04 19:26 ` Daniel Vetter [this message]
2011-09-04 19:56 ` Ben Widawsky
2011-09-04 20:10 ` Daniel Vetter
2011-09-04 21:38 ` Ben Widawsky
2011-09-05 6:38 ` Daniel Vetter
2011-09-05 6:51 ` Ben Widawsky
2011-09-05 12:15 ` Chris Wilson
2011-09-04 15:35 ` [PATCH 3/3] drm/i915: close rps work vs. rps disable races Daniel Vetter
2011-09-04 17:23 ` Ben Widawsky
2011-09-04 19:17 ` Daniel Vetter
2011-09-04 19:50 ` Ben Widawsky
2011-09-04 19:57 ` Daniel Vetter
2011-09-05 8:15 ` [PATCH] drm/i915: properly cancel rps_work on module unload Daniel Vetter
2011-09-05 17:27 ` Ben Widawsky
2011-09-08 12:00 [PATCH 1/3] drm/i915: close PM interrupt masking races in the irq handler Daniel Vetter
2011-09-08 12:00 ` [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func Daniel Vetter
2011-09-08 15:19 ` Ben Widawsky