From: Ben Widawsky <ben@bwidawsk.net>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func
Date: Sun, 4 Sep 2011 21:38:56 +0000	[thread overview]
Message-ID: <20110904213856.GA18071@cloud01> (raw)
In-Reply-To: <20110904201030.GE2799@phenom.ffwll.local>

On Sun, Sep 04, 2011 at 10:10:30PM +0200, Daniel Vetter wrote:
> On Sun, Sep 04, 2011 at 07:56:57PM +0000, Ben Widawsky wrote:
> > On Sun, Sep 04, 2011 at 09:26:48PM +0200, Daniel Vetter wrote:
> > > On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> > > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > > index 55518e3..3bc1479 100644
> > > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > > @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
> > > >         gen6_set_rps(dev_priv->dev, new_delay);
> > > >         dev_priv->cur_delay = new_delay;
> > > >  
> > > > -       /*
> > > > -        * rps_lock not held here because clearing is non-destructive. There is
> > > > -        * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> > > > -        * by holding struct_mutex for the duration of the write.
> > > > -        */
> > > > -       I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> > > > +       I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
> > > >         mutex_unlock(&dev_priv->dev->struct_mutex);
> > > >  }
> > > 
> > > For this to work we'd need to hold the rps_lock (to avoid racing with the
> > > irq handler). But imo my approach is conceptually simpler: The work func
> > > grabs all outstanding PM interrupts and then enables them again in one go
> > > (protected by rps_lock).
> > 
> > I agree your approach is simpler, but I think we should really consider
> > whether my approach actually requires the lock. I *think* it doesn't. At
> > least in my head my patch should fix the error you spotted. I don't
> > know, maybe I need to think some more.
> 
> 1. rps work reads dev_priv->pm_iir (anew in the line you've added).
> 2. irq handler runs, adds a new bit to dev_priv->pm_iir and sets PMIMR to
> dev_priv->pm_iir (under irqsafe rps_lock).
> 3. rps work writes crap to PMIMR.
> 
> I.e. same race, you've just dramatically reduced the window ;-)
> 
> > The reason I worked so hard to avoid doing it the way you did in my
> > original implementation is that I was trying hard not to break the
> > cardinal rule about minimizing the time spent holding irq-disabling
> > spinlocks. To go with the other patch, you probably also want a
> > POSTING_READ before releasing the spin_lock (though I think this is
> > being a bit paranoid).
> 
> The POSTING_READ was to order the PMIMR write with the PMIIR write (both
> in the irq handler). There's no such ordering requirement here (and the
> irq handler can't be interrupted), so I think we're safe.
> 
> -Daniel

Oops, you're totally right, I think I meant:
-       I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
+       I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
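
To fully close the window you describe, I guess the read of
dev_priv->pm_iir and the PMIMR write then have to sit under rps_lock as
well. Rough, untested sketch of what I mean for the tail of
gen6_pm_rps_work() (same lock/field names as in the patches above):

        spin_lock_irq(&dev_priv->rps_lock);
        /* irq handler can't queue new bits between the read and the write */
        I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
        spin_unlock_irq(&dev_priv->rps_lock);
        mutex_unlock(&dev_priv->dev->struct_mutex);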

With regard to the POSTING_READ, the concern I had was that the write
to IMR might not land before releasing the spinlock, but I don't feel
like addressing that concern anymore.
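
For the record, the paranoid variant I was picturing was just something
like this (equally untested), flushing the IMR write before the lock is
dropped:

        I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
        POSTING_READ(GEN6_PMIMR);       /* make sure the write has landed */
        spin_unlock_irq(&dev_priv->rps_lock);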

Ben

Thread overview: 24+ messages
2011-09-04  3:24 [PATCH] drm/i915: Fix rps irq warning Ben Widawsky
2011-09-04  9:03 ` Chris Wilson
2011-09-04 15:49   ` Ben Widawsky
2011-09-04 15:34     ` [PATCH 0/3] slaughter rps races some more Daniel Vetter
2011-09-04 15:35       ` [PATCH 1/3] drm/i915: close PM interrupt masking races in the irq handler Daniel Vetter
2011-09-04 17:09         ` Ben Widawsky
2011-09-04 15:35       ` [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func Daniel Vetter
2011-09-04 17:08         ` Ben Widawsky
2011-09-04 19:26           ` Daniel Vetter
2011-09-04 19:56             ` Ben Widawsky
2011-09-04 20:10               ` Daniel Vetter
2011-09-04 21:38                 ` Ben Widawsky [this message]
2011-09-05  6:38                   ` Daniel Vetter
2011-09-05  6:51                     ` Ben Widawsky
2011-09-05 12:15                       ` Chris Wilson
2011-09-04 15:35       ` [PATCH 3/3] drm/i915: close rps work vs. rps disable races Daniel Vetter
2011-09-04 17:23         ` Ben Widawsky
2011-09-04 19:17           ` Daniel Vetter
2011-09-04 19:50             ` Ben Widawsky
2011-09-04 19:57               ` Daniel Vetter
2011-09-05  8:15                 ` [PATCH] drm/i915: properly cancel rps_work on module unload Daniel Vetter
2011-09-05 17:27                   ` Ben Widawsky
2011-09-08 12:00 [PATCH 1/3] drm/i915: close PM interrupt masking races in the irq handler Daniel Vetter
2011-09-08 12:00 ` [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func Daniel Vetter
2011-09-08 15:19   ` Ben Widawsky
