From: Ben Widawsky
Subject: [PATCH 04/10] drm/i915: drop polled waits from i915_wait_request
Date: Fri, 20 Apr 2012 18:23:26 -0700
Message-ID: <1334971412-4826-5-git-send-email-ben@bwidawsk.net>
In-Reply-To: <1334971412-4826-1-git-send-email-ben@bwidawsk.net>
References: <1334971412-4826-1-git-send-email-ben@bwidawsk.net>
To: intel-gfx@lists.freedesktop.org
Cc: Ben Widawsky, Ben Widawsky

The only time irq_get should fail is during unload or suspend. Both of
these points should try to quiesce the GPU before disabling interrupts,
and so the atomic polling should never occur. Chris Wilson recommended
this as a way of avoiding the extra complexity of the polled wait I
introduced in an RFC patch.

09:57 < ickle_> it's only there as a fudge for waiting after irqs after uninstalled during s&r, we aren't actually meant to hit it
09:57 < ickle_> so maybe we should just kill the code there and fix the breakage

Cc: Chris Wilson
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a86a6ee..6ad52d3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1892,22 +1892,19 @@ i915_wait_request(struct intel_ring_buffer *ring,
 
 	if (!i915_seqno_passed(ring->get_seqno(ring), seqno)) {
 		trace_i915_gem_request_wait_begin(ring, seqno);
-		if (ring->irq_get(ring)) {
-			if (dev_priv->mm.interruptible)
-				ret = wait_event_interruptible(ring->irq_queue,
-					i915_seqno_passed(ring->get_seqno(ring), seqno)
-					|| atomic_read(&dev_priv->mm.wedged));
-			else
-				wait_event(ring->irq_queue,
-					i915_seqno_passed(ring->get_seqno(ring), seqno)
-					|| atomic_read(&dev_priv->mm.wedged));
+		if (WARN_ON(!ring->irq_get(ring)))
+			return -EBUSY;
 
-			ring->irq_put(ring);
-		} else if (wait_for_atomic(i915_seqno_passed(ring->get_seqno(ring),
-					   seqno) ||
-					   atomic_read(&dev_priv->mm.wedged), 3000))
-			ret = -EBUSY;
+		if (dev_priv->mm.interruptible)
+			ret = wait_event_interruptible(ring->irq_queue,
+				i915_seqno_passed(ring->get_seqno(ring), seqno)
+				|| atomic_read(&dev_priv->mm.wedged));
+		else
+			wait_event(ring->irq_queue,
+				i915_seqno_passed(ring->get_seqno(ring), seqno)
+				|| atomic_read(&dev_priv->mm.wedged));
 
+		ring->irq_put(ring);
 		trace_i915_gem_request_wait_end(ring, seqno);
 	}
 	if (atomic_read(&dev_priv->mm.wedged))
-- 
1.7.10
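
For reference, the wait path after this patch reduces to the sketch below.
This is only a simplified restatement of the new hunk (surrounding seqno
checks, locking and tracepoints omitted), not additional code in the patch:

	/* Interrupts are expected to be available here: the GPU is
	 * quiesced before irqs are torn down on unload/suspend, so a
	 * failing irq_get indicates a bug rather than a state we must
	 * poll our way out of.
	 */
	if (WARN_ON(!ring->irq_get(ring)))
		return -EBUSY;

	if (dev_priv->mm.interruptible)
		ret = wait_event_interruptible(ring->irq_queue,
			i915_seqno_passed(ring->get_seqno(ring), seqno) ||
			atomic_read(&dev_priv->mm.wedged));
	else
		wait_event(ring->irq_queue,
			i915_seqno_passed(ring->get_seqno(ring), seqno) ||
			atomic_read(&dev_priv->mm.wedged));

	ring->irq_put(ring);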