From: Ben Widawsky
Subject: Re: [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func
Date: Sun, 4 Sep 2011 21:38:56 +0000
Message-ID: <20110904213856.GA18071@cloud01>
In-Reply-To: <20110904201030.GE2799@phenom.ffwll.local>
References: <20110904084953.16cd10a2@bwidawsk.net> <1315150502-12537-1-git-send-email-daniel.vetter@ffwll.ch> <1315150502-12537-3-git-send-email-daniel.vetter@ffwll.ch> <20110904100817.16c6c4cc@bwidawsk.net> <20110904192648.GB2799@phenom.ffwll.local> <20110904195657.GB17304@cloud01> <20110904201030.GE2799@phenom.ffwll.local>
To: Daniel Vetter
Cc: Daniel Vetter, intel-gfx@lists.freedesktop.org
List-Id: intel-gfx@lists.freedesktop.org

On Sun, Sep 04, 2011 at 10:10:30PM +0200, Daniel Vetter wrote:
> On Sun, Sep 04, 2011 at 07:56:57PM +0000, Ben Widawsky wrote:
> > On Sun, Sep 04, 2011 at 09:26:48PM +0200, Daniel Vetter wrote:
> > > On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> > > > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > > > index 55518e3..3bc1479 100644
> > > > --- a/drivers/gpu/drm/i915/i915_irq.c
> > > > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > > > @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
> > > >  	gen6_set_rps(dev_priv->dev, new_delay);
> > > >  	dev_priv->cur_delay = new_delay;
> > > >  
> > > > -	/*
> > > > -	 * rps_lock not held here because clearing is non-destructive. There is
> > > > -	 * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> > > > -	 * by holding struct_mutex for the duration of the write.
> > > > -	 */
> > > > -	I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> > > > +	I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
> > > >  	mutex_unlock(&dev_priv->dev->struct_mutex);
> > > >  }
> > > 
> > > For this to work we'd need to hold the rps_lock (to avoid racing with the
> > > irq handler). But imo my approach is conceptually simpler: The work func
> > > grabs all outstanding PM interrupts and then enables them again in one go
> > > (protected by rps_lock).
> > 
> > I agree your approach is similar, but I think we should really consider
> > whether my approach actually requires the lock. I *think* it doesn't. At
> > least in my head my patch should fix the error you spotted. I don't
> > know, maybe I need to think some more.
> 
> 1. rps work reads dev_priv->pm_iir (anew in the line you've added).
> 2. irq handler runs, adds a new bit to dev_priv->pm_iir and sets PMIMR to
>    dev_priv->pm_iir (under the irqsafe rps_lock).
> 3. rps work writes crap to PMIMR.
> 
> I.e. same race, you've just dramatically reduced the window ;-)
> 
> > The reason I worked so hard to avoid doing it the way you did in my
> > original implementation is that I was trying really hard not to break the
> > cardinal rule about minimizing time holding spinlock_irqs. To go with
> > the other patch, you probably want a POSTING_READ also before releasing
> > the spin_lock (though I think this is being a bit paranoid).
> 
> The POSTING_READ was to order the PMIMR write with the PMIIR write (both
> in the irq handler). There's no such ordering here (and the irq handler
> can't be interrupted), so I think we're safe.
> 
> -Daniel

Oops, you're totally right. I think I meant:

-	I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
+	I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);

With regard to the POSTING_READ, the concern I had was whether the write to
IMR lands before releasing the spinlock, but I don't feel like addressing
that concern anymore.

Ben
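
P.S. Just so we're talking about the same thing, here's an untested sketch
of that corrected write, done under rps_lock as you pointed out, at the tail
of gen6_pm_rps_work (exact surrounding context may differ from the patch):

	unsigned long flags;

	/*
	 * Unmask everything except the PM interrupts that are still
	 * pending in dev_priv->pm_iir. Holding rps_lock closes the window
	 * in which the irq handler could add a new bit to pm_iir and
	 * update PMIMR between our read and our write.
	 */
	spin_lock_irqsave(&dev_priv->rps_lock, flags);
	I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
	spin_unlock_irqrestore(&dev_priv->rps_lock, flags);

	mutex_unlock(&dev_priv->dev->struct_mutex);

The lock is held only around the single register write, so it shouldn't run
afoul of the "keep spinlock_irq hold times short" rule either.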