From: Daniel Vetter
Subject: Re: [PATCH 2/3] drm/i915: close PM interrupt masking races in the rps work func
Date: Sun, 4 Sep 2011 21:26:48 +0200
Message-ID: <20110904192648.GB2799@phenom.ffwll.local>
References: <20110904084953.16cd10a2@bwidawsk.net> <1315150502-12537-1-git-send-email-daniel.vetter@ffwll.ch> <1315150502-12537-3-git-send-email-daniel.vetter@ffwll.ch> <20110904100817.16c6c4cc@bwidawsk.net>
In-Reply-To: <20110904100817.16c6c4cc@bwidawsk.net>
To: Ben Widawsky
Cc: Daniel Vetter, intel-gfx@lists.freedesktop.org
List-Id: intel-gfx@lists.freedesktop.org

On Sun, Sep 04, 2011 at 10:08:17AM -0700, Ben Widawsky wrote:
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 55518e3..3bc1479 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -415,12 +415,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
>  	gen6_set_rps(dev_priv->dev, new_delay);
>  	dev_priv->cur_delay = new_delay;
>  
> -	/*
> -	 * rps_lock not held here because clearing is non-destructive. There is
> -	 * an *extremely* unlikely race with gen6_rps_enable() that is prevented
> -	 * by holding struct_mutex for the duration of the write.
> -	 */
> -	I915_WRITE(GEN6_PMIMR, pm_imr & ~pm_iir);
> +	I915_WRITE(GEN6_PMIMR, pm_imr & dev_priv->pm_iir);
>  	mutex_unlock(&dev_priv->dev->struct_mutex);
>  }

For this to work we'd need to hold the rps_lock (to avoid racing with the irq handler). But imo my approach is conceptually simpler: the work func grabs all outstanding PM interrupts and then enables them again in one go (protected by rps_lock). And because the dev_priv->wq workqueue is single-threaded (no point in using multiple threads when all work items grab dev->struct_mutex) we also cannot make a mess by running work items in the wrong order (or in parallel).
-Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48