From: Chris Wilson <chris@chris-wilson.co.uk>
Subject: [PATCH 21/30] drm/i915: Redirect GTT mappings to the CPU page if cache-coherent
Date: Tue, 12 Apr 2011 21:31:49 +0100
Message-ID: <1302640318-23165-22-git-send-email-chris@chris-wilson.co.uk>
References: <1302640318-23165-1-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1302640318-23165-1-git-send-email-chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org

... or if we will need to perform a cache-flush on the object anyway.
Unless, of course, we need to use a fence register to perform tiling
operations during the transfer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c |   34 ++++++++++++++++++++++++++++++++--
 1 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9d87258..2961f37 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1211,12 +1211,40 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	trace_i915_gem_object_fault(obj, page_offset, true, write);
 
-	/* Now bind it into the GTT if needed */
 	if (!obj->map_and_fenceable) {
 		ret = i915_gem_object_unbind(obj);
 		if (ret)
 			goto unlock;
 	}
+
+	/* If it is unbound or we are currently writing through the CPU
+	 * domain, continue to do so.
+	 */
+	if (obj->tiling_mode == I915_TILING_NONE &&
+	    (obj->cache_level != I915_CACHE_NONE ||
+	     obj->base.write_domain == I915_GEM_DOMAIN_CPU)) {
+		struct page *page;
+
+		ret = i915_gem_object_set_to_cpu_domain(obj, write);
+		if (ret)
+			goto unlock;
+
+		obj->dirty = 1;
+		obj->fault_mappable = true;
+		mutex_unlock(&dev->struct_mutex);
+
+		page = read_cache_page_gfp(obj->base.filp->f_path.dentry->d_inode->i_mapping,
+					   page_offset,
+					   GFP_HIGHUSER | __GFP_RECLAIMABLE);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			goto out;
+		}
+
+		vmf->page = page;
+		return VM_FAULT_LOCKED;
+	}
+
 	if (!obj->gtt_space) {
 		ret = i915_gem_object_bind_to_gtt(obj, 0, true);
 		if (ret)
@@ -3597,8 +3625,10 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	/* if the object is no longer bound, discard its backing storage */
 	if (i915_gem_object_is_purgeable(obj) &&
-	    obj->gtt_space == NULL)
+	    obj->gtt_space == NULL) {
+		i915_gem_release_mmap(obj);
 		i915_gem_object_truncate(obj);
+	}
 
 	args->retained = obj->madv != __I915_MADV_PURGED;
-- 
1.7.4.1
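
For anyone who wants to exercise the affected fault path from userspace, a
minimal sketch follows. It is illustrative only and not part of the patch:
it assumes an already-open DRM fd for an i915 device, uses the existing
DRM_IOCTL_I915_GEM_CREATE and DRM_IOCTL_I915_GEM_MMAP_GTT ioctls, and trims
all error handling. With the patch applied, a fault on the GTT mapping of an
untiled, cache-coherent object (or one already in the CPU write domain) is
served straight from the shmem page instead of binding the object into the
aperture.

/* Illustrative userspace sketch, not part of the patch.
 * Include path assumes installed kernel uapi headers; adjust for libdrm.
 * Error handling is omitted for brevity.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

static void *map_bo_through_gtt(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_i915_gem_create create;
	struct drm_i915_gem_mmap_gtt gtt;

	/* Create an object; it starts out untiled (I915_TILING_NONE). */
	memset(&create, 0, sizeof(create));
	create.size = size;
	ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
	*handle = create.handle;

	/* Ask for the fake offset used to mmap the object through the GTT. */
	memset(&gtt, 0, sizeof(gtt));
	gtt.handle = create.handle;
	ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &gtt);

	/* Every page touched through this mapping goes via i915_gem_fault();
	 * with the patch, an untiled cache-coherent object is served from the
	 * CPU page rather than being bound into the mappable aperture. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, gtt.offset);
}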