From: Chris Wilson <chris@chris-wilson.co.uk>
Subject: [RFC] mm,drm/i915: Mark pinned shmemfs pages as unevictable
Date: Tue, 6 Jun 2017 13:04:36 +0100
Message-Id: <20170606120436.8683-1-chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org, linux-mm@kvack.org
Cc: Chris Wilson, Joonas Lahtinen, Matthew Auld, Dave Hansen,
	"Kirill A. Shutemov", Andrew Morton, Michal Hocko

Similar in principle to the treatment of get_user_pages, pages that
i915.ko acquires from shmemfs are not immediately reclaimable and so
should be excluded from mm accounting and vmscan until they have been
returned to the system via shrink_slab/i915_gem_shrink. By moving the
unreclaimable pages off the inactive anon LRU, not only should vmscan
improve by no longer walking pages it cannot reclaim, but the system
should also have a more accurate picture of how much memory it can
reclaim at any given moment.

Note, however, the interaction with shrink_slab, which will move some
mlocked pages back to the inactive anon LRU.

Suggested-by: Dave Hansen
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen
Cc: Matthew Auld
Cc: Dave Hansen
Cc: "Kirill A. Shutemov"
Cc: Andrew Morton
Cc: Michal Hocko
---
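A note for reviewers on the pattern being applied. The sketch below is
illustration only, not part of the diff: pin_shmem_page() and
unpin_shmem_page() are hypothetical names invented for this example,
while shmem_read_mapping_page_gfp() and the mlock_vma_page()/
munlock_vma_page() pair (exported by this patch) are the real
interfaces. Both mlock helpers assert PageLocked(), hence the
lock_page()/unlock_page() bracket around each call, as in the diff.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/* Not in a public header; this patch declares them locally, as here */
extern void mlock_vma_page(struct page *page);
extern unsigned int munlock_vma_page(struct page *page);

static struct page *pin_shmem_page(struct address_space *mapping,
				   pgoff_t index, gfp_t gfp)
{
	struct page *page;

	/* Returns the page with an elevated refcount, or ERR_PTR() */
	page = shmem_read_mapping_page_gfp(mapping, index, gfp);
	if (IS_ERR(page))
		return page;

	lock_page(page);
	mlock_vma_page(page);	/* pull the page off its LRU list */
	unlock_page(page);

	return page;
}

static void unpin_shmem_page(struct page *page)
{
	lock_page(page);
	munlock_vma_page(page);	/* make the page evictable again */
	unlock_page(page);

	put_page(page);		/* drop the pin taken at lookup */
}

Reusing the mlock machinery, rather than inventing a parallel scheme,
should keep the unevictable accounting consistent with what vmscan
already expects for mlocked pages.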
 drivers/gpu/drm/i915/i915_gem.c | 17 ++++++++++++++++-
 mm/mlock.c                      |  2 ++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 8cb811519db1..37a98fbc6a12 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2193,6 +2193,9 @@ void __i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 	obj->mm.pages = ERR_PTR(-EFAULT);
 }
 
+extern void mlock_vma_page(struct page *page);
+extern unsigned int munlock_vma_page(struct page *page);
+
 static void
 i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 			      struct sg_table *pages)
@@ -2214,6 +2217,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 		if (obj->mm.madv == I915_MADV_WILLNEED)
 			mark_page_accessed(page);
 
+		lock_page(page);
+		munlock_vma_page(page);
+		unlock_page(page);
+
 		put_page(page);
 	}
 	obj->mm.dirty = false;
@@ -2412,6 +2419,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		}
 		last_pfn = page_to_pfn(page);
 
+		lock_page(page);
+		mlock_vma_page(page);
+		unlock_page(page);
+
 		/* Check that the i965g/gm workaround works. */
 		WARN_ON((gfp & __GFP_DMA32) && (last_pfn >= 0x00100000UL));
 	}
@@ -2450,8 +2461,12 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_sg:
 	sg_mark_end(sg);
 err_pages:
-	for_each_sgt_page(page, sgt_iter, st)
+	for_each_sgt_page(page, sgt_iter, st) {
+		lock_page(page);
+		munlock_vma_page(page);
+		unlock_page(page);
 		put_page(page);
+	}
 	sg_free_table(st);
 	kfree(st);
 
diff --git a/mm/mlock.c b/mm/mlock.c
index b562b5523a65..531d9f8fd033 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -94,6 +94,7 @@ void mlock_vma_page(struct page *page)
 			putback_lru_page(page);
 	}
 }
+EXPORT_SYMBOL_GPL(mlock_vma_page);
 
 /*
  * Isolate a page from LRU with optional get_page() pin.
@@ -211,6 +212,7 @@ unsigned int munlock_vma_page(struct page *page)
 out:
 	return nr_pages - 1;
 }
+EXPORT_SYMBOL_GPL(munlock_vma_page);
 
 /*
  * convert get_user_pages() return value to posix mlock() error
-- 
2.11.0