From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1752862AbdCMIzM (ORCPT );
        Mon, 13 Mar 2017 04:55:12 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:40840 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1752580AbdCMIqw (ORCPT );
        Mon, 13 Mar 2017 04:46:52 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Chris Wilson,
        Tvrtko Ursulin, Jani Nikula
Subject: [PATCH 4.10 63/75] drm/i915: Recreate internal objects with single page segments if dmar fails
Date: Mon, 13 Mar 2017 16:44:12 +0800
Message-Id: <20170313083414.853110876@linuxfoundation.org>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20170313083411.408297387@linuxfoundation.org>
References: <20170313083411.408297387@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Chris Wilson

commit 2d2cfc12b1270c8451edc7d2dd5f79097b3a17d8 upstream.

If we fail to dma-map the object, the most common cause is lack of space
inside the SW-IOTLB due to fragmentation. If we recreate the sg_table
using segments of PAGE_SIZE (and single page allocations), we may succeed
in remapping the scatterlist.

First became a significant problem for the mock selftests after commit
5584f1b1d73e ("drm/i915: fix i915 running as dom0 under Xen") increased
the max_order.

Fixes: 920cf4194954 ("drm/i915: Introduce an internal allocator for disposable private objects")
Fixes: 5584f1b1d73e ("drm/i915: fix i915 running as dom0 under Xen")
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Link: http://patchwork.freedesktop.org/patch/msgid/20170202132721.12711-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin
(cherry picked from commit bb96dcf5830e5d81a1da2e2a14e6c0f7dfc64348)
Signed-off-by: Jani Nikula
Signed-off-by: Greg Kroah-Hartman

---
 drivers/gpu/drm/i915/i915_gem_internal.c |   37 +++++++++++++++++++------------
 1 file changed, 23 insertions(+), 14 deletions(-)

--- a/drivers/gpu/drm/i915/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/i915_gem_internal.c
@@ -46,24 +46,12 @@ static struct sg_table *
 i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	unsigned int npages = obj->base.size / PAGE_SIZE;
 	struct sg_table *st;
 	struct scatterlist *sg;
+	unsigned int npages;
 	int max_order;
 	gfp_t gfp;
 
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st)
-		return ERR_PTR(-ENOMEM);
-
-	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-		kfree(st);
-		return ERR_PTR(-ENOMEM);
-	}
-
-	sg = st->sgl;
-	st->nents = 0;
-
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
 	if (swiotlb_nr_tbl()) {
@@ -85,6 +73,20 @@ i915_gem_object_get_pages_internal(struc
 		gfp |= __GFP_DMA32;
 	}
 
+create_st:
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (!st)
+		return ERR_PTR(-ENOMEM);
+
+	npages = obj->base.size / PAGE_SIZE;
+	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
+		kfree(st);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	sg = st->sgl;
+	st->nents = 0;
+
 	do {
 		int order = min(fls(npages) - 1, max_order);
 		struct page *page;
@@ -112,8 +114,15 @@ i915_gem_object_get_pages_internal(struc
 		sg = __sg_next(sg);
 	} while (1);
 
-	if (i915_gem_gtt_prepare_pages(obj, st))
+	if (i915_gem_gtt_prepare_pages(obj, st)) {
+		/* Failed to dma-map try again with single page sg segments */
+		if (get_order(st->sgl->length)) {
+			internal_free_pages(st);
+			max_order = 0;
+			goto create_st;
+		}
 		goto err;
+	}
 
 	/* Mark the pages as dontneed whilst they are still pinned. As soon
 	 * as they are unpinned they are allowed to be reaped by the shrinker,
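
For reference, the pattern the hunks above implement is: populate the
sg_table from the largest page orders that will allocate, attempt the DMA
mapping, and if mapping fails while the table still holds multi-page
segments, free everything and rebuild it from order-0 pages so the SW-IOTLB
only needs single-page slots. The sketch below illustrates that retry loop
against the generic scatterlist/DMA API; it is not the i915 code.
build_and_map_sgt() and free_sgt_pages() are hypothetical stand-ins,
dma_map_sg() stands in for i915_gem_gtt_prepare_pages(), and the per-order
gfp tweaks the real driver applies are omitted.

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hypothetical helper, loosely mirroring i915's internal_free_pages(). */
static void free_sgt_pages(struct sg_table *st)
{
	struct scatterlist *sg;

	for (sg = st->sgl; sg; sg = sg_next(sg))
		if (sg_page(sg))
			__free_pages(sg_page(sg), get_order(sg->length));

	sg_free_table(st);
	kfree(st);
}

/* Hypothetical stand-in for the patched allocator path. */
static struct sg_table *build_and_map_sgt(struct device *dev, size_t size)
{
	unsigned int npages;
	struct sg_table *st;
	struct scatterlist *sg;
	int max_order = MAX_ORDER;	/* over-high orders simply fail and step down */

create_st:
	st = kmalloc(sizeof(*st), GFP_KERNEL);
	if (!st)
		return ERR_PTR(-ENOMEM);

	/* Worst case one entry per page, i.e. every allocation is order-0. */
	npages = size / PAGE_SIZE;
	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
		kfree(st);
		return ERR_PTR(-ENOMEM);
	}

	sg = st->sgl;
	st->nents = 0;
	do {
		/* Largest order that still fits the remaining pages. */
		int order = min(fls(npages) - 1, max_order);
		struct page *page;

		/* Step down the order until an allocation succeeds. */
		do {
			page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);
			if (page)
				break;
			if (!order--)
				goto err_free;
		} while (1);

		sg_set_page(sg, page, PAGE_SIZE << order, 0);
		st->nents++;

		npages -= 1 << order;
		if (!npages) {
			sg_mark_end(sg);
			break;
		}
		sg = sg_next(sg);
	} while (1);

	if (!dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL)) {
		/*
		 * Mapping failed, most likely SW-IOTLB fragmentation.  If the
		 * table still uses multi-page segments, throw it away and
		 * retry with single-page (order-0) segments only.
		 */
		if (get_order(st->sgl->length)) {
			free_sgt_pages(st);
			max_order = 0;
			goto create_st;
		}
		goto err_free;
	}

	return st;

err_free:
	free_sgt_pages(st);
	return ERR_PTR(-ENOMEM);
}

The detail preserved from the patch is the get_order(st->sgl->length)
check: the rebuild is attempted only when the previous table actually used
multi-page segments, so a second failure with order-0 segments propagates
an error instead of looping forever.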