* [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
@ 2018-11-06  9:30 ` Kuo-Hsin Yang
  0 siblings, 0 replies; 24+ messages in thread
From: Kuo-Hsin Yang @ 2018-11-06  9:30 UTC (permalink / raw)
  To: linux-kernel, intel-gfx, linux-mm
  Cc: Kuo-Hsin Yang, Chris Wilson, Joonas Lahtinen, Peter Zijlstra,
	Andrew Morton, Dave Hansen, Michal Hocko

The i915 driver uses shmemfs to allocate backing storage for gem
objects. These shmemfs pages can be pinned (increased ref count) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. In some extreme cases,
all pages in the inactive anon lru are pinned, and only the inactive
anon lru is scanned due to inactive_ratio; the system cannot swap and
invokes the oom-killer. Mark these pinned pages as unevictable to speed
up vmscan.

Export pagevec API check_move_unevictable_pages().

This patch was inspired by Chris Wilson's change [1].

[1]: https://patchwork.kernel.org/patch/9768741/
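
In outline (a condensed sketch of the i915 hunks below, not a standalone
snippet), the driver marks the shmemfs mapping unevictable while its
pages are pinned, and moves the pages back onto their proper lru when it
releases them:

	/* acquiring backing storage: keep vmscan away from these pages */
	mapping = obj->base.filp->f_mapping;
	mapping_set_unevictable(mapping);
	/* ... pages are then pinned via shmem_read_mapping_page_gfp() ... */

	/* releasing backing storage: clear the flag and rescue the pages
	 * onto the right lru via the exported pagevec helper
	 */
	mapping_clear_unevictable(mapping);
	pagevec_init(&pvec);
	for_each_sgt_page(page, sgt_iter, pages)
		if (!pagevec_add(&pvec, page))
			check_release_pagevec(&pvec);
	check_release_pagevec(&pvec);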

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
Acked-by: Michal Hocko <mhocko@suse.com> # mm part
---
Changes for v6:
 Tweak the acked-by.

Changes for v5:
 Modify doc and comments. Remove the ifdef surrounding
 check_move_unevictable_pages.

Changes for v4:
 Export pagevec API check_move_unevictable_pages().

Changes for v3:
 Use check_move_lru_page instead of shmem_unlock_mapping to move pages
 to appropriate lru lists.

Changes for v2:
 Squashed the two patches.

 Documentation/vm/unevictable-lru.rst |  6 +++++-
 drivers/gpu/drm/i915/i915_gem.c      | 28 ++++++++++++++++++++++++++--
 include/linux/swap.h                 |  4 +++-
 mm/shmem.c                           |  2 +-
 mm/vmscan.c                          | 22 +++++++++++-----------
 5 files changed, 46 insertions(+), 16 deletions(-)

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..b8e29f977f2d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
 	Query the address space, and return true if it is completely
 	unevictable.
 
-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:
 
  (1) By ramfs to mark the address spaces of its inodes when they are created,
      and this mark remains for the life of the inode.
@@ -154,6 +154,10 @@ These are currently used in two places in the kernel:
      swapped out; the application must touch the pages manually if it wants to
      ensure they're in memory.
 
+ (3) By the i915 driver to mark pinned address space until it's unpinned. The
+     amount of unevictable memory marked by i915 driver is roughly the bounded
+     object size in debugfs/dri/0/i915_gem_objects.
+
 
 Detecting Unevictable Pages
 ---------------------------
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0c8aa57ce83b..c620891e0d02 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2381,12 +2381,25 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
 	invalidate_mapping_pages(mapping, 0, (loff_t)-1);
 }
 
+/**
+ * Move pages to appropriate lru and release the pagevec. Decrement the ref
+ * count of these pages.
+ */
+static inline void check_release_pagevec(struct pagevec *pvec)
+{
+	if (pagevec_count(pvec)) {
+		check_move_unevictable_pages(pvec);
+		__pagevec_release(pvec);
+	}
+}
+
 static void
 i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 			      struct sg_table *pages)
 {
 	struct sgt_iter sgt_iter;
 	struct page *page;
+	struct pagevec pvec;
 
 	__i915_gem_object_release_shmem(obj, pages, true);
 
@@ -2395,6 +2408,9 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_save_bit_17_swizzle(obj, pages);
 
+	mapping_clear_unevictable(file_inode(obj->base.filp)->i_mapping);
+
+	pagevec_init(&pvec);
 	for_each_sgt_page(page, sgt_iter, pages) {
 		if (obj->mm.dirty)
 			set_page_dirty(page);
@@ -2402,8 +2418,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 		if (obj->mm.madv == I915_MADV_WILLNEED)
 			mark_page_accessed(page);
 
-		put_page(page);
+		if (!pagevec_add(&pvec, page))
+			check_release_pagevec(&pvec);
 	}
+	check_release_pagevec(&pvec);
 	obj->mm.dirty = false;
 
 	sg_free_table(pages);
@@ -2526,6 +2544,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	unsigned int sg_page_sizes;
 	gfp_t noreclaim;
 	int ret;
+	struct pagevec pvec;
 
 	/*
 	 * Assert that the object is not currently in any GPU domain. As it
@@ -2559,6 +2578,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
 	noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
 	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
 
@@ -2673,8 +2693,12 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_sg:
 	sg_mark_end(sg);
 err_pages:
+	mapping_clear_unevictable(mapping);
+	pagevec_init(&pvec);
 	for_each_sgt_page(page, sgt_iter, st)
-		put_page(page);
+		if (!pagevec_add(&pvec, page))
+			check_release_pagevec(&pvec);
+	check_release_pagevec(&pvec);
 	sg_free_table(st);
 	kfree(st);
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index d8a07a4f171d..a8f6d5d89524 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -18,6 +18,8 @@ struct notifier_block;
 
 struct bio;
 
+struct pagevec;
+
 #define SWAP_FLAG_PREFER	0x8000	/* set if swap priority specified */
 #define SWAP_FLAG_PRIO_MASK	0x7fff
 #define SWAP_FLAG_PRIO_SHIFT	0
@@ -369,7 +371,7 @@ static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
 #endif
 
 extern int page_evictable(struct page *page);
-extern void check_move_unevictable_pages(struct page **, int nr_pages);
+extern void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern int kswapd_run(int nid);
 extern void kswapd_stop(int nid);
diff --git a/mm/shmem.c b/mm/shmem.c
index ea26d7a0342d..de4893c904a3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -756,7 +756,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
 			break;
 		index = indices[pvec.nr - 1] + 1;
 		pagevec_remove_exceptionals(&pvec);
-		check_move_unevictable_pages(pvec.pages, pvec.nr);
+		check_move_unevictable_pages(&pvec);
 		pagevec_release(&pvec);
 		cond_resched();
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 62ac0c488624..d070f431ff19 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -50,6 +50,7 @@
 #include <linux/printk.h>
 #include <linux/dax.h>
 #include <linux/psi.h>
+#include <linux/pagevec.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -4182,17 +4183,16 @@ int page_evictable(struct page *page)
 	return ret;
 }
 
-#ifdef CONFIG_SHMEM
 /**
- * check_move_unevictable_pages - check pages for evictability and move to appropriate zone lru list
- * @pages:	array of pages to check
- * @nr_pages:	number of pages to check
+ * check_move_unevictable_pages - check pages for evictability and move to
+ * appropriate zone lru list
+ * @pvec: pagevec with lru pages to check
  *
- * Checks pages for evictability and moves them to the appropriate lru list.
- *
- * This function is only used for SysV IPC SHM_UNLOCK.
+ * Checks pages for evictability, if an evictable page is in the unevictable
+ * lru list, moves it to the appropriate evictable lru list. This function
+ * should be only used for lru pages.
  */
-void check_move_unevictable_pages(struct page **pages, int nr_pages)
+void check_move_unevictable_pages(struct pagevec *pvec)
 {
 	struct lruvec *lruvec;
 	struct pglist_data *pgdat = NULL;
@@ -4200,8 +4200,8 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 	int pgrescued = 0;
 	int i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct page *page = pages[i];
+	for (i = 0; i < pvec->nr; i++) {
+		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
 
 		pgscanned++;
@@ -4233,4 +4233,4 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 		spin_unlock_irq(&pgdat->lru_lock);
 	}
 }
-#endif /* CONFIG_SHMEM */
+EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
-- 
2.19.1.930.g4563a0d9d0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev5)
  2018-11-06  9:30 ` Kuo-Hsin Yang
  (?)
@ 2018-11-06  9:38 ` Patchwork
  -1 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2018-11-06  9:38 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev5)
URL   : https://patchwork.freedesktop.org/series/25337/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
0b49765ac12e mm, drm/i915: mark pinned shmemfs pages as unevictable
-:153: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#153: FILE: include/linux/swap.h:374:
+extern void check_move_unevictable_pages(struct pagevec *pvec);

total: 0 errors, 0 warnings, 1 checks, 160 lines checked

^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✗ Fi.CI.BAT: failure for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev5)
  2018-11-06  9:30 ` Kuo-Hsin Yang
  (?)
  (?)
@ 2018-11-06 10:10 ` Patchwork
  -1 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2018-11-06 10:10 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev5)
URL   : https://patchwork.freedesktop.org/series/25337/
State : failure

== Summary ==

= CI Bug Log - changes from CI_DRM_5090 -> Patchwork_10735 =

== Summary - FAILURE ==

  Serious unknown changes coming with Patchwork_10735 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_10735, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://patchwork.freedesktop.org/api/1.0/series/25337/revisions/5/mbox/

== Possible new issues ==

  Here are the unknown changes that may have been introduced in Patchwork_10735:

  === IGT changes ===

    ==== Possible regressions ====

    igt@drv_selftest@live_contexts:
      fi-kbl-7560u:       PASS -> DMESG-FAIL

    igt@kms_chamelium@common-hpd-after-suspend:
      fi-skl-6700k2:      PASS -> WARN

    
== Known issues ==

  Here are the changes found in Patchwork_10735 that come from known issues:

  === IGT changes ===

    ==== Issues hit ====

    igt@drv_selftest@live_contexts:
      fi-icl-u:           NOTRUN -> INCOMPLETE (fdo#108315, fdo#108535)

    igt@pm_rpm@basic-pci-d3-state:
      fi-glk-j4005:       PASS -> DMESG-WARN (fdo#106097)

    igt@pm_rpm@module-reload:
      fi-glk-j4005:       PASS -> DMESG-WARN (fdo#107726)

    
    ==== Possible fixes ====

    igt@gem_exec_suspend@basic-s4-devices:
      fi-blb-e6850:       INCOMPLETE (fdo#107718) -> PASS

    igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a:
      fi-snb-2520m:       DMESG-FAIL (fdo#103713) -> PASS

    igt@kms_pipe_crc_basic@nonblocking-crc-pipe-a-frame-sequence:
      fi-snb-2520m:       INCOMPLETE (fdo#103713) -> PASS

    
  fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713
  fdo#106097 https://bugs.freedesktop.org/show_bug.cgi?id=106097
  fdo#107718 https://bugs.freedesktop.org/show_bug.cgi?id=107718
  fdo#107726 https://bugs.freedesktop.org/show_bug.cgi?id=107726
  fdo#108315 https://bugs.freedesktop.org/show_bug.cgi?id=108315
  fdo#108535 https://bugs.freedesktop.org/show_bug.cgi?id=108535


== Participating hosts (50 -> 46) ==

  Additional (2): fi-icl-u fi-pnv-d510 
  Missing    (6): fi-ilk-m540 fi-hsw-4200u fi-skl-guc fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 


== Build changes ==

    * Linux: CI_DRM_5090 -> Patchwork_10735

  CI_DRM_5090: 756a0fd616c3ea0486f5c239f7801f71303ff389 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_4709: 15dff9353621d0746b80fae534c20621e03a9f01 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_10735: 0b49765ac12e7e27a066420b909c3f2485ec5abc @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

0b49765ac12e mm, drm/i915: mark pinned shmemfs pages as unevictable

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_10735/issues.html

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06  9:30 ` Kuo-Hsin Yang
  (?)
@ 2018-11-06 10:54   ` Daniel Vetter
  -1 siblings, 0 replies; 24+ messages in thread
From: Daniel Vetter @ 2018-11-06 10:54 UTC (permalink / raw)
  To: Kuo-Hsin Yang
  Cc: linux-kernel, intel-gfx, linux-mm, Chris Wilson, Joonas Lahtinen,
	Peter Zijlstra, Andrew Morton, Dave Hansen, Michal Hocko

On Tue, Nov 06, 2018 at 05:30:59PM +0800, Kuo-Hsin Yang wrote:
> The i915 driver uses shmemfs to allocate backing storage for gem
> objects. These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. In some extreme case,
> all pages in the inactive anon lru are pinned, and only the inactive
> anon lru is scanned due to inactive_ratio, the system cannot swap and
> invokes the oom-killer. Mark these pinned pages as unevictable to speed
> up vmscan.
> 
> Export pagevec API check_move_unevictable_pages().
> 
> This patch was inspired by Chris Wilson's change [1].
> 
> [1]: https://patchwork.kernel.org/patch/9768741/
> 
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
> Acked-by: Michal Hocko <mhocko@suse.com> # mm part

There were plans ages ago to have our own i915fs, so that we could
override the address_space hooks for page migration and eviction and that
sort of thing, which would make all these pages evictable. Atm you have to
hope our shrinker drops them on the floor, which I think is fairly
confusing to core mm code (it's kinda like how page eviction worked way
back before rmaps).

Just an aside really.
-Daniel
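
(Purely as an illustration of the aside above — no such i915fs exists and
nothing like this is in the patch — the hooks being alluded to are the
ones a dedicated filesystem could supply in its address_space_operations,
roughly:

	static const struct address_space_operations i915fs_aops = {
		.writepage   = i915fs_writepage,   /* hypothetical: swap out idle objects */
		.releasepage = i915fs_releasepage, /* hypothetical: drop clean backing pages */
		.migratepage = i915fs_migratepage, /* hypothetical: let compaction move pages */
	};

with eviction policy then driven by core mm rather than the i915
shrinker.)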

> ---
> Changes for v6:
>  Tweak the acked-by.
> 
> Changes for v5:
>  Modify doc and comments. Remove the ifdef surrounding
>  check_move_unevictable_pages.
> 
> Changes for v4:
>  Export pagevec API check_move_unevictable_pages().
> 
> Changes for v3:
>  Use check_move_lru_page instead of shmem_unlock_mapping to move pages
>  to appropriate lru lists.
> 
> Changes for v2:
>  Squashed the two patches.
> 
>  Documentation/vm/unevictable-lru.rst |  6 +++++-
>  drivers/gpu/drm/i915/i915_gem.c      | 28 ++++++++++++++++++++++++++--
>  include/linux/swap.h                 |  4 +++-
>  mm/shmem.c                           |  2 +-
>  mm/vmscan.c                          | 22 +++++++++++-----------
>  5 files changed, 46 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
> index fdd84cb8d511..b8e29f977f2d 100644
> --- a/Documentation/vm/unevictable-lru.rst
> +++ b/Documentation/vm/unevictable-lru.rst
> @@ -143,7 +143,7 @@ using a number of wrapper functions:
>  	Query the address space, and return true if it is completely
>  	unevictable.
>  
> -These are currently used in two places in the kernel:
> +These are currently used in three places in the kernel:
>  
>   (1) By ramfs to mark the address spaces of its inodes when they are created,
>       and this mark remains for the life of the inode.
> @@ -154,6 +154,10 @@ These are currently used in two places in the kernel:
>       swapped out; the application must touch the pages manually if it wants to
>       ensure they're in memory.
>  
> + (3) By the i915 driver to mark pinned address space until it's unpinned. The
> +     amount of unevictable memory marked by i915 driver is roughly the bounded
> +     object size in debugfs/dri/0/i915_gem_objects.
> +
>  
>  Detecting Unevictable Pages
>  ---------------------------
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0c8aa57ce83b..c620891e0d02 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2381,12 +2381,25 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
>  	invalidate_mapping_pages(mapping, 0, (loff_t)-1);
>  }
>  
> +/**
> + * Move pages to appropriate lru and release the pagevec. Decrement the ref
> + * count of these pages.
> + */
> +static inline void check_release_pagevec(struct pagevec *pvec)
> +{
> +	if (pagevec_count(pvec)) {
> +		check_move_unevictable_pages(pvec);
> +		__pagevec_release(pvec);
> +	}
> +}
> +
>  static void
>  i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
>  			      struct sg_table *pages)
>  {
>  	struct sgt_iter sgt_iter;
>  	struct page *page;
> +	struct pagevec pvec;
>  
>  	__i915_gem_object_release_shmem(obj, pages, true);
>  
> @@ -2395,6 +2408,9 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
>  	if (i915_gem_object_needs_bit17_swizzle(obj))
>  		i915_gem_object_save_bit_17_swizzle(obj, pages);
>  
> +	mapping_clear_unevictable(file_inode(obj->base.filp)->i_mapping);
> +
> +	pagevec_init(&pvec);
>  	for_each_sgt_page(page, sgt_iter, pages) {
>  		if (obj->mm.dirty)
>  			set_page_dirty(page);
> @@ -2402,8 +2418,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
>  		if (obj->mm.madv == I915_MADV_WILLNEED)
>  			mark_page_accessed(page);
>  
> -		put_page(page);
> +		if (!pagevec_add(&pvec, page))
> +			check_release_pagevec(&pvec);
>  	}
> +	check_release_pagevec(&pvec);
>  	obj->mm.dirty = false;
>  
>  	sg_free_table(pages);
> @@ -2526,6 +2544,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  	unsigned int sg_page_sizes;
>  	gfp_t noreclaim;
>  	int ret;
> +	struct pagevec pvec;
>  
>  	/*
>  	 * Assert that the object is not currently in any GPU domain. As it
> @@ -2559,6 +2578,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  	 * Fail silently without starting the shrinker
>  	 */
>  	mapping = obj->base.filp->f_mapping;
> +	mapping_set_unevictable(mapping);
>  	noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
>  	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
>  
> @@ -2673,8 +2693,12 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
>  err_sg:
>  	sg_mark_end(sg);
>  err_pages:
> +	mapping_clear_unevictable(mapping);
> +	pagevec_init(&pvec);
>  	for_each_sgt_page(page, sgt_iter, st)
> -		put_page(page);
> +		if (!pagevec_add(&pvec, page))
> +			check_release_pagevec(&pvec);
> +	check_release_pagevec(&pvec);
>  	sg_free_table(st);
>  	kfree(st);
>  
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index d8a07a4f171d..a8f6d5d89524 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -18,6 +18,8 @@ struct notifier_block;
>  
>  struct bio;
>  
> +struct pagevec;
> +
>  #define SWAP_FLAG_PREFER	0x8000	/* set if swap priority specified */
>  #define SWAP_FLAG_PRIO_MASK	0x7fff
>  #define SWAP_FLAG_PRIO_SHIFT	0
> @@ -369,7 +371,7 @@ static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
>  #endif
>  
>  extern int page_evictable(struct page *page);
> -extern void check_move_unevictable_pages(struct page **, int nr_pages);
> +extern void check_move_unevictable_pages(struct pagevec *pvec);
>  
>  extern int kswapd_run(int nid);
>  extern void kswapd_stop(int nid);
> diff --git a/mm/shmem.c b/mm/shmem.c
> index ea26d7a0342d..de4893c904a3 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -756,7 +756,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
>  			break;
>  		index = indices[pvec.nr - 1] + 1;
>  		pagevec_remove_exceptionals(&pvec);
> -		check_move_unevictable_pages(pvec.pages, pvec.nr);
> +		check_move_unevictable_pages(&pvec);
>  		pagevec_release(&pvec);
>  		cond_resched();
>  	}
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 62ac0c488624..d070f431ff19 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -50,6 +50,7 @@
>  #include <linux/printk.h>
>  #include <linux/dax.h>
>  #include <linux/psi.h>
> +#include <linux/pagevec.h>
>  
>  #include <asm/tlbflush.h>
>  #include <asm/div64.h>
> @@ -4182,17 +4183,16 @@ int page_evictable(struct page *page)
>  	return ret;
>  }
>  
> -#ifdef CONFIG_SHMEM
>  /**
> - * check_move_unevictable_pages - check pages for evictability and move to appropriate zone lru list
> - * @pages:	array of pages to check
> - * @nr_pages:	number of pages to check
> + * check_move_unevictable_pages - check pages for evictability and move to
> + * appropriate zone lru list
> + * @pvec: pagevec with lru pages to check
>   *
> - * Checks pages for evictability and moves them to the appropriate lru list.
> - *
> - * This function is only used for SysV IPC SHM_UNLOCK.
> + * Checks pages for evictability, if an evictable page is in the unevictable
> + * lru list, moves it to the appropriate evictable lru list. This function
> + * should be only used for lru pages.
>   */
> -void check_move_unevictable_pages(struct page **pages, int nr_pages)
> +void check_move_unevictable_pages(struct pagevec *pvec)
>  {
>  	struct lruvec *lruvec;
>  	struct pglist_data *pgdat = NULL;
> @@ -4200,8 +4200,8 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
>  	int pgrescued = 0;
>  	int i;
>  
> -	for (i = 0; i < nr_pages; i++) {
> -		struct page *page = pages[i];
> +	for (i = 0; i < pvec->nr; i++) {
> +		struct page *page = pvec->pages[i];
>  		struct pglist_data *pagepgdat = page_pgdat(page);
>  
>  		pgscanned++;
> @@ -4233,4 +4233,4 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
>  		spin_unlock_irq(&pgdat->lru_lock);
>  	}
>  }
> -#endif /* CONFIG_SHMEM */
> +EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
> -- 
> 2.19.1.930.g4563a0d9d0-goog
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06  9:30 ` Kuo-Hsin Yang
  (?)
@ 2018-11-06 11:06   ` Chris Wilson
  -1 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2018-11-06 11:06 UTC (permalink / raw)
  To: Kuo-Hsin Yang, intel-gfx, linux-kernel, linux-mm
  Cc: Kuo-Hsin Yang, Joonas Lahtinen, Peter Zijlstra, Andrew Morton,
	Dave Hansen, Michal Hocko

Quoting Kuo-Hsin Yang (2018-11-06 09:30:59)
> The i915 driver uses shmemfs to allocate backing storage for gem
> objects. These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. In some extreme case,
> all pages in the inactive anon lru are pinned, and only the inactive
> anon lru is scanned due to inactive_ratio, the system cannot swap and
> invokes the oom-killer. Mark these pinned pages as unevictable to speed
> up vmscan.
> 
> Export pagevec API check_move_unevictable_pages().
> 
> This patch was inspired by Chris Wilson's change [1].
> 
> [1]: https://patchwork.kernel.org/patch/9768741/
> 
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
> Acked-by: Michal Hocko <mhocko@suse.com> # mm part
> ---
> Changes for v6:
>  Tweak the acked-by.
> 
> Changes for v5:
>  Modify doc and comments. Remove the ifdef surrounding
>  check_move_unevictable_pages.
> 
> Changes for v4:
>  Export pagevec API check_move_unevictable_pages().
> 
> Changes for v3:
>  Use check_move_lru_page instead of shmem_unlock_mapping to move pages
>  to appropriate lru lists.
> 
> Changes for v2:
>  Squashed the two patches.
> 
>  Documentation/vm/unevictable-lru.rst |  6 +++++-
>  drivers/gpu/drm/i915/i915_gem.c      | 28 ++++++++++++++++++++++++++--
>  include/linux/swap.h                 |  4 +++-
>  mm/shmem.c                           |  2 +-
>  mm/vmscan.c                          | 22 +++++++++++-----------
>  5 files changed, 46 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
> index fdd84cb8d511..b8e29f977f2d 100644
> --- a/Documentation/vm/unevictable-lru.rst
> +++ b/Documentation/vm/unevictable-lru.rst
> @@ -143,7 +143,7 @@ using a number of wrapper functions:
>         Query the address space, and return true if it is completely
>         unevictable.
>  
> -These are currently used in two places in the kernel:
> +These are currently used in three places in the kernel:
>  
>   (1) By ramfs to mark the address spaces of its inodes when they are created,
>       and this mark remains for the life of the inode.
> @@ -154,6 +154,10 @@ These are currently used in two places in the kernel:
>       swapped out; the application must touch the pages manually if it wants to
>       ensure they're in memory.
>  
> + (3) By the i915 driver to mark pinned address space until it's unpinned. The
> +     amount of unevictable memory marked by i915 driver is roughly the bounded
> +     object size in debugfs/dri/0/i915_gem_objects.
> +
>  
>  Detecting Unevictable Pages
>  ---------------------------
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 0c8aa57ce83b..c620891e0d02 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2381,12 +2381,25 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
>         invalidate_mapping_pages(mapping, 0, (loff_t)-1);
>  }
>  
> +/**
> + * Move pages to appropriate lru and release the pagevec. Decrement the ref
> + * count of these pages.
> + */
> +static inline void check_release_pagevec(struct pagevec *pvec)
> +{
> +       if (pagevec_count(pvec)) {
> +               check_move_unevictable_pages(pvec);
> +               __pagevec_release(pvec);

This gave disappointing syslatency results until I put a cond_resched()
here and moved the one in put_pages_gtt to before the page alloc, see
https://patchwork.freedesktop.org/patch/260332/
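
(Roughly — as a sketch of the placement being described, not the actual
patch behind the link above — that would look something like:

	static inline void check_release_pagevec(struct pagevec *pvec)
	{
		if (pagevec_count(pvec)) {
			check_move_unevictable_pages(pvec);
			__pagevec_release(pvec);
		}
		cond_resched();	/* yield between batches of released pages */
	}

the relocation of the other cond_resched() mentioned above is not shown.)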

The last really nasty wart for syslatency is the spin in
i915_gem_shrinker, for which I'm investigating
https://patchwork.freedesktop.org/patch/260365/

All 3 patches together give very reasonable syslatency results! (So
good that it's time to find a new worst case scenario!)

The challenge for the patch as it stands is who lands it? We can take
it through drm-intel (for merging in 4.21), but we need Andrew's ack on
top of it all to agree with that path. Or we split the patch and only land the
i915 portion once we backmerge the mm tree. I think pushing the i915
portion through the mm tree is going to cause the most conflicts, so
would recommend against that.
-Chris

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 11:06   ` Chris Wilson
  (?)
  (?)
@ 2018-11-06 11:49   ` Kuo-Hsin Yang
  -1 siblings, 0 replies; 24+ messages in thread
From: Kuo-Hsin Yang @ 2018-11-06 11:49 UTC (permalink / raw)
  To: Chris Wilson
  Cc: intel-gfx, linux-kernel, linux-mm, Joonas Lahtinen,
	Peter Zijlstra, Andrew Morton, Dave Hansen, Michal Hocko

On Tue, Nov 6, 2018 at 7:07 PM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> This gave disappointing syslatency results until I put a cond_resched()
> here and moved the one in put_pages_gtt to before the page alloc, see
> https://patchwork.freedesktop.org/patch/260332/
>
> The last really nasty wart for syslatency is the spin in
> i915_gem_shrinker, for which I'm investigating
> https://patchwork.freedesktop.org/patch/260365/
>
> All 3 patches together give very reasonable syslatency results! (So
> good that it's time to find a new worst case scenario!)
>
> The challenge for the patch as it stands, is who lands it? We can take
> it through drm-intel (for merging in 4.21) but need Andrew's ack on top
> of all to agree with that path. Or we split the patch and only land the
> i915 portion once we backmerge the mm tree. I think pushing the i915
> portion through the mm tree is going to cause the most conflicts, so
> would recommend against that.

Splitting the patch and landing the mm part first sounds reasonable to me.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 11:06   ` Chris Wilson
@ 2018-11-06 12:14     ` Michal Hocko
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2018-11-06 12:14 UTC (permalink / raw)
  To: Chris Wilson
  Cc: Kuo-Hsin Yang, intel-gfx, linux-kernel, linux-mm,
	Joonas Lahtinen, Peter Zijlstra, Andrew Morton, Dave Hansen

On Tue 06-11-18 11:06:58, Chris Wilson wrote:
[...]
> The challenge for the patch as it stands, is who lands it? We can take
> it through drm-intel (for merging in 4.21) but need Andrew's ack on top
> of all to agree with that path. Or we split the patch and only land the
> i915 portion once we backmerge the mm tree. I think pushing the i915
> portion through the mm tree is going to cause the most conflicts, so
> would recommend against that.

I usually prefer new exports to go along with their users. I am pretty
sure that the core mm change can be routed via whatever tree needs that.
Up to Andrew but this doesn't seem to be conflicting with anything that
is going on in MM.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v7] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06  9:30 ` Kuo-Hsin Yang
                   ` (4 preceding siblings ...)
  (?)
@ 2018-11-06 13:23 ` Chris Wilson
  2018-11-06 14:14     ` Kuo-Hsin Yang
                     ` (2 more replies)
  -1 siblings, 3 replies; 24+ messages in thread
From: Chris Wilson @ 2018-11-06 13:23 UTC (permalink / raw)
  To: intel-gfx
  Cc: linux-mm, linux-kernel, Kuo-Hsin Yang, Chris Wilson,
	Joonas Lahtinen, Peter Zijlstra, Andrew Morton, Dave Hansen

From: Kuo-Hsin Yang <vovoy@chromium.org>

The i915 driver uses shmemfs to allocate backing storage for gem
objects. These shmemfs pages can be pinned (increased ref count) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. In some extreme case,
all pages in the inactive anon lru are pinned, and only the inactive
anon lru is scanned due to inactive_ratio, the system cannot swap and
invokes the oom-killer. Mark these pinned pages as unevictable to speed
up vmscan.

Export pagevec API check_move_unevictable_pages().

This patch was inspired by Chris Wilson's change [1].

[1]: https://patchwork.kernel.org/patch/9768741/

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
Acked-by: Michal Hocko <mhocko@suse.com> # mm part
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
---
Rebased on drm-intel-next-queued to pick up a cond_resched()
-Chris
---
 Documentation/vm/unevictable-lru.rst |  6 +++++-
 drivers/gpu/drm/i915/i915_gem.c      | 30 +++++++++++++++++++++++++---
 include/linux/swap.h                 |  4 +++-
 mm/shmem.c                           |  2 +-
 mm/vmscan.c                          | 22 ++++++++++----------
 5 files changed, 47 insertions(+), 17 deletions(-)

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..b8e29f977f2d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
 	Query the address space, and return true if it is completely
 	unevictable.
 
-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:
 
  (1) By ramfs to mark the address spaces of its inodes when they are created,
      and this mark remains for the life of the inode.
@@ -154,6 +154,10 @@ These are currently used in two places in the kernel:
      swapped out; the application must touch the pages manually if it wants to
      ensure they're in memory.
 
+ (3) By the i915 driver to mark pinned address space until it's unpinned. The
+     amount of unevictable memory marked by i915 driver is roughly the bounded
+     object size in debugfs/dri/0/i915_gem_objects.
+
 
 Detecting Unevictable Pages
 ---------------------------
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 347b3836c809..1c09d3e93c21 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2382,12 +2382,26 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
 	invalidate_mapping_pages(mapping, 0, (loff_t)-1);
 }
 
+/**
+ * Move pages to appropriate lru and release the pagevec. Decrement the ref
+ * count of these pages.
+ */
+static inline void check_release_pagevec(struct pagevec *pvec)
+{
+	if (pagevec_count(pvec)) {
+		check_move_unevictable_pages(pvec);
+		__pagevec_release(pvec);
+		cond_resched();
+	}
+}
+
 static void
 i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 			      struct sg_table *pages)
 {
 	struct sgt_iter sgt_iter;
 	struct page *page;
+	struct pagevec pvec;
 
 	__i915_gem_object_release_shmem(obj, pages, true);
 
@@ -2396,6 +2410,9 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_save_bit_17_swizzle(obj, pages);
 
+	mapping_clear_unevictable(file_inode(obj->base.filp)->i_mapping);
+
+	pagevec_init(&pvec);
 	for_each_sgt_page(page, sgt_iter, pages) {
 		if (obj->mm.dirty)
 			set_page_dirty(page);
@@ -2403,9 +2420,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 		if (obj->mm.madv == I915_MADV_WILLNEED)
 			mark_page_accessed(page);
 
-		put_page(page);
-		cond_resched();
+		if (!pagevec_add(&pvec, page))
+			check_release_pagevec(&pvec);
 	}
+	check_release_pagevec(&pvec);
 	obj->mm.dirty = false;
 
 	sg_free_table(pages);
@@ -2528,6 +2546,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	unsigned int sg_page_sizes;
 	gfp_t noreclaim;
 	int ret;
+	struct pagevec pvec;
 
 	/*
 	 * Assert that the object is not currently in any GPU domain. As it
@@ -2561,6 +2580,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
 	noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
 	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
 
@@ -2675,8 +2695,12 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_sg:
 	sg_mark_end(sg);
 err_pages:
+	mapping_clear_unevictable(mapping);
+	pagevec_init(&pvec);
 	for_each_sgt_page(page, sgt_iter, st)
-		put_page(page);
+		if (!pagevec_add(&pvec, page))
+			check_release_pagevec(&pvec);
+	check_release_pagevec(&pvec);
 	sg_free_table(st);
 	kfree(st);
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8e2c11e692ba..6c95df96c9aa 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -18,6 +18,8 @@ struct notifier_block;
 
 struct bio;
 
+struct pagevec;
+
 #define SWAP_FLAG_PREFER	0x8000	/* set if swap priority specified */
 #define SWAP_FLAG_PRIO_MASK	0x7fff
 #define SWAP_FLAG_PRIO_SHIFT	0
@@ -373,7 +375,7 @@ static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
 #endif
 
 extern int page_evictable(struct page *page);
-extern void check_move_unevictable_pages(struct page **, int nr_pages);
+extern void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern int kswapd_run(int nid);
 extern void kswapd_stop(int nid);
diff --git a/mm/shmem.c b/mm/shmem.c
index 446942677cd4..0c3b005a59eb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -781,7 +781,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
 			break;
 		index = indices[pvec.nr - 1] + 1;
 		pagevec_remove_exceptionals(&pvec);
-		check_move_unevictable_pages(pvec.pages, pvec.nr);
+		check_move_unevictable_pages(&pvec);
 		pagevec_release(&pvec);
 		cond_resched();
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c7ce2c161225..0dbc493026a2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -46,6 +46,7 @@
 #include <linux/delayacct.h>
 #include <linux/sysctl.h>
 #include <linux/oom.h>
+#include <linux/pagevec.h>
 #include <linux/prefetch.h>
 #include <linux/printk.h>
 #include <linux/dax.h>
@@ -4162,17 +4163,16 @@ int page_evictable(struct page *page)
 	return ret;
 }
 
-#ifdef CONFIG_SHMEM
 /**
- * check_move_unevictable_pages - check pages for evictability and move to appropriate zone lru list
- * @pages:	array of pages to check
- * @nr_pages:	number of pages to check
+ * check_move_unevictable_pages - check pages for evictability and move to
+ * appropriate zone lru list
+ * @pvec: pagevec with lru pages to check
  *
- * Checks pages for evictability and moves them to the appropriate lru list.
- *
- * This function is only used for SysV IPC SHM_UNLOCK.
+ * Checks pages for evictability, if an evictable page is in the unevictable
+ * lru list, moves it to the appropriate evictable lru list. This function
+ * should be only used for lru pages.
  */
-void check_move_unevictable_pages(struct page **pages, int nr_pages)
+void check_move_unevictable_pages(struct pagevec *pvec)
 {
 	struct lruvec *lruvec;
 	struct pglist_data *pgdat = NULL;
@@ -4180,8 +4180,8 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 	int pgrescued = 0;
 	int i;
 
-	for (i = 0; i < nr_pages; i++) {
-		struct page *page = pages[i];
+	for (i = 0; i < pvec->nr; i++) {
+		struct page *page = pvec->pages[i];
 		struct pglist_data *pagepgdat = page_pgdat(page);
 
 		pgscanned++;
@@ -4213,4 +4213,4 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 		spin_unlock_irq(&pgdat->lru_lock);
 	}
 }
-#endif /* CONFIG_SHMEM */
+EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread
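
Taken together, the pattern the patch establishes for a driver pinning
shmemfs pages is: flag the whole mapping as unevictable before taking the
page references, and on release clear the flag and let
check_move_unevictable_pages() move the pages back onto a reclaimable LRU,
as i915_gem_object_put_pages_gtt() above does. A minimal sketch of the pin
side, using a hypothetical object type and field names purely for
illustration (error unwind abbreviated):

    struct pinned_shmem_obj {           /* hypothetical stand-in for a gem object */
            struct file *filp;          /* shmemfs file backing the object */
            struct page **pages;
            unsigned long nr_pages;
    };

    static int pin_shmem_backing_store(struct pinned_shmem_obj *obj)
    {
            struct address_space *mapping = obj->filp->f_mapping;
            unsigned long i;

            /* Stop vmscan from repeatedly scanning pages it can never reclaim. */
            mapping_set_unevictable(mapping);

            for (i = 0; i < obj->nr_pages; i++) {
                    struct page *page;

                    page = shmem_read_mapping_page_gfp(mapping, i, GFP_HIGHUSER);
                    if (IS_ERR(page)) {
                            /* Real code must mapping_clear_unevictable() and
                             * release the pages gathered so far, as the
                             * err_pages path in the patch does. */
                            return PTR_ERR(page);
                    }
                    obj->pages[i] = page;   /* the extra reference is the pin */
            }
            return 0;
    }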

* ✗ Fi.CI.CHECKPATCH: warning for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
  2018-11-06  9:30 ` Kuo-Hsin Yang
                   ` (5 preceding siblings ...)
  (?)
@ 2018-11-06 13:53 ` Patchwork
  -1 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2018-11-06 13:53 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
URL   : https://patchwork.freedesktop.org/series/25337/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
98590d8d7b8c mm, drm/i915: mark pinned shmemfs pages as unevictable
-:156: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#156: FILE: include/linux/swap.h:374:
+extern void check_move_unevictable_pages(struct pagevec *pvec);

total: 0 errors, 0 warnings, 1 checks, 162 lines checked

^ permalink raw reply	[flat|nested] 24+ messages in thread
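
The CHECK here is purely stylistic: for function declarations the 'extern'
storage class is implicit, so the prototype the patch adds to
include/linux/swap.h could equally be written without it. A minimal example
of the shorter spelling (same function, same linkage):

    /* equivalent declaration, no 'extern' needed on a function prototype */
    void check_move_unevictable_pages(struct pagevec *pvec);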

* Re: [PATCH v7] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 13:23 ` [PATCH v7] " Chris Wilson
@ 2018-11-06 14:14     ` Kuo-Hsin Yang
  2018-11-06 17:32   ` Dave Hansen
  2018-11-06 18:12     ` Andrew Morton
  2 siblings, 0 replies; 24+ messages in thread
From: Kuo-Hsin Yang @ 2018-11-06 14:14 UTC (permalink / raw)
  To: Chris Wilson
  Cc: intel-gfx, linux-mm, linux-kernel, Joonas Lahtinen,
	Peter Zijlstra, Andrew Morton, Dave Hansen

On Tue, Nov 6, 2018 at 9:23 PM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Signed-off-by: Kuo-Hsin Yang <vovoy@chromium.org>
> Acked-by: Michal Hocko <mhocko@suse.com> # mm part
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>

Thanks for your fixes and review.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* ✓ Fi.CI.BAT: success for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
  2018-11-06  9:30 ` Kuo-Hsin Yang
                   ` (6 preceding siblings ...)
  (?)
@ 2018-11-06 14:14 ` Patchwork
  -1 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2018-11-06 14:14 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
URL   : https://patchwork.freedesktop.org/series/25337/
State : success

== Summary ==

= CI Bug Log - changes from CI_DRM_5094 -> Patchwork_10736 =

== Summary - SUCCESS ==

  No regressions found.

  External URL: https://patchwork.freedesktop.org/api/1.0/series/25337/revisions/6/mbox/

== Known issues ==

  Here are the changes found in Patchwork_10736 that come from known issues:

  === IGT changes ===

    ==== Issues hit ====

    igt@drv_module_reload@basic-reload:
      fi-blb-e6850:       PASS -> INCOMPLETE (fdo#107718)

    igt@drv_selftest@live_coherency:
      fi-gdg-551:         PASS -> DMESG-FAIL (fdo#107164)

    igt@kms_cursor_legacy@basic-flip-before-cursor-varying-size:
      fi-bsw-n3050:       PASS -> DMESG-WARN (fdo#106207) +22

    igt@kms_frontbuffer_tracking@basic:
      fi-byt-clapper:     PASS -> FAIL (fdo#103167)

    igt@kms_pipe_crc_basic@hang-read-crc-pipe-a:
      fi-snb-2520m:       PASS -> DMESG-FAIL (fdo#103713)

    igt@kms_pipe_crc_basic@hang-read-crc-pipe-b:
      fi-snb-2520m:       PASS -> INCOMPLETE (fdo#103713)

    igt@kms_pipe_crc_basic@read-crc-pipe-a-frame-sequence:
      fi-byt-clapper:     PASS -> FAIL (fdo#107362, fdo#103191)

    
    ==== Possible fixes ====

    igt@gem_ctx_create@basic-files:
      fi-bsw-kefka:       FAIL (fdo#108656) -> PASS

    igt@kms_frontbuffer_tracking@basic:
      fi-hsw-peppy:       DMESG-WARN (fdo#102614) -> PASS

    igt@kms_pipe_crc_basic@read-crc-pipe-a:
      fi-byt-clapper:     FAIL (fdo#107362) -> PASS

    igt@pm_rpm@module-reload:
      fi-glk-j4005:       DMESG-WARN (fdo#106000) -> PASS

    
  fdo#102614 https://bugs.freedesktop.org/show_bug.cgi?id=102614
  fdo#103167 https://bugs.freedesktop.org/show_bug.cgi?id=103167
  fdo#103191 https://bugs.freedesktop.org/show_bug.cgi?id=103191
  fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713
  fdo#106000 https://bugs.freedesktop.org/show_bug.cgi?id=106000
  fdo#106207 https://bugs.freedesktop.org/show_bug.cgi?id=106207
  fdo#107164 https://bugs.freedesktop.org/show_bug.cgi?id=107164
  fdo#107362 https://bugs.freedesktop.org/show_bug.cgi?id=107362
  fdo#107718 https://bugs.freedesktop.org/show_bug.cgi?id=107718
  fdo#108656 https://bugs.freedesktop.org/show_bug.cgi?id=108656


== Participating hosts (53 -> 47) ==

  Missing    (6): fi-kbl-soraka fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 


== Build changes ==

    * Linux: CI_DRM_5094 -> Patchwork_10736

  CI_DRM_5094: ad5b2d7213c64cbaa5837e45757011af9b3aa366 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_4710: 431f0cfa1475dcaa475d6c30610317b3467bd4e4 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_10736: 98590d8d7b8c57b86caaad4b6bbe98e73da123d7 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

98590d8d7b8c mm, drm/i915: mark pinned shmemfs pages as unevictable

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_10736/issues.html

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 10:54   ` Daniel Vetter
  (?)
  (?)
@ 2018-11-06 15:19   ` Kuo-Hsin Yang
  -1 siblings, 0 replies; 24+ messages in thread
From: Kuo-Hsin Yang @ 2018-11-06 15:19 UTC (permalink / raw)
  To: linux-kernel, intel-gfx, linux-mm, Chris Wilson, Joonas Lahtinen,
	Peter Zijlstra, Andrew Morton, Dave Hansen, Michal Hocko

On Tue, Nov 6, 2018 at 6:54 PM Daniel Vetter <daniel@ffwll.ch> wrote:
> There were ages ago some plans to have our own i915fs, so that we could
> overwrite the address_space hooks for page migration and eviction and that
> sort of thing, which would make all these pages evictable. Atm you have to
> hope our shrinker drops them on the floor, which I think is fairly
> confusing to core mm code (it's kinda like page eviction worked way back
> before rmaps).
>

Thanks for the explanation. Your blog posts helped a lot to get me
started hacking on the drm/i915 driver.

> Just an aside really.
> -Daniel
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v7] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 13:23 ` [PATCH v7] " Chris Wilson
  2018-11-06 14:14     ` Kuo-Hsin Yang
@ 2018-11-06 17:32   ` Dave Hansen
  2018-11-06 18:12     ` Andrew Morton
  2 siblings, 0 replies; 24+ messages in thread
From: Dave Hansen @ 2018-11-06 17:32 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx
  Cc: linux-mm, linux-kernel, Kuo-Hsin Yang, Joonas Lahtinen,
	Peter Zijlstra, Andrew Morton

On 11/6/18 5:23 AM, Chris Wilson wrote:
> + (3) By the i915 driver to mark pinned address space until it's unpinned. The
> +     amount of unevictable memory marked by i915 driver is roughly the bounded
> +     object size in debugfs/dri/0/i915_gem_objects.

Thanks for adding this.  Feel free to add my:

Acked-by: Dave Hansen <dave.hansen@intel.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v7] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 13:23 ` [PATCH v7] " Chris Wilson
@ 2018-11-06 18:12     ` Andrew Morton
  2018-11-06 17:32   ` Dave Hansen
  2018-11-06 18:12     ` Andrew Morton
  2 siblings, 0 replies; 24+ messages in thread
From: Andrew Morton @ 2018-11-06 18:12 UTC (permalink / raw)
  To: Chris Wilson
  Cc: intel-gfx, linux-mm, linux-kernel, Kuo-Hsin Yang,
	Joonas Lahtinen, Peter Zijlstra, Dave Hansen

On Tue,  6 Nov 2018 13:23:24 +0000 Chris Wilson <chris@chris-wilson.co.uk> wrote:

> From: Kuo-Hsin Yang <vovoy@chromium.org>
> 
> The i915 driver uses shmemfs to allocate backing storage for gem
> objects. These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. In some extreme case,
> all pages in the inactive anon lru are pinned, and only the inactive
> anon lru is scanned due to inactive_ratio, the system cannot swap and
> invokes the oom-killer. Mark these pinned pages as unevictable to speed
> up vmscan.
> 
> Export pagevec API check_move_unevictable_pages().
> 
> This patch was inspired by Chris Wilson's change [1].
> 
> [1]: https://patchwork.kernel.org/patch/9768741/
> 
> ...
>
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2382,12 +2382,26 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
>  	invalidate_mapping_pages(mapping, 0, (loff_t)-1);
>  }
>  
> +/**

This token is used to introduce a kerneldoc comment.

> + * Move pages to appropriate lru and release the pagevec. Decrement the ref
> + * count of these pages.
> + */

But this isn't a kerneldoc comment.

At least, I don't think it is.  Maybe the parser got smarter when I
wasn't looking.

> +static inline void check_release_pagevec(struct pagevec *pvec)
> +{
> +	if (pagevec_count(pvec)) {
> +		check_move_unevictable_pages(pvec);
> +		__pagevec_release(pvec);
> +		cond_resched();
> +	}
> +}

This looks too large to be inlined and the compiler will ignore the
`inline' anyway.


Otherwise, Acked-by: Andrew Morton <akpm@linux-foundation.org>.  Please
go ahead and merge via the appropriate drm tree.


^ permalink raw reply	[flat|nested] 24+ messages in thread
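
Both nits are mechanical to address, and Chris confirms below that he
applied them before pushing: the /** marker becomes a plain comment so
kernel-doc does not try to parse it, and the inline hint is dropped since
the compiler makes that call anyway. Presumably the helper ends up along
these lines (an unverified sketch, not the committed hunk):

    /*
     * Move the pages to the appropriate LRU lists and release the pagevec,
     * dropping the references taken when the pages were pinned.
     */
    static void check_release_pagevec(struct pagevec *pvec)
    {
            if (pagevec_count(pvec)) {
                    check_move_unevictable_pages(pvec);
                    __pagevec_release(pvec);
                    cond_resched();
            }
    }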

* ✗ Fi.CI.IGT: failure for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
  2018-11-06  9:30 ` Kuo-Hsin Yang
                   ` (7 preceding siblings ...)
  (?)
@ 2018-11-06 19:38 ` Patchwork
  -1 siblings, 0 replies; 24+ messages in thread
From: Patchwork @ 2018-11-06 19:38 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6)
URL   : https://patchwork.freedesktop.org/series/25337/
State : failure

== Summary ==

= CI Bug Log - changes from CI_DRM_5094_full -> Patchwork_10736_full =

== Summary - FAILURE ==

  Serious unknown changes coming with Patchwork_10736_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_10736_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

== Possible new issues ==

  Here are the unknown changes that may have been introduced in Patchwork_10736_full:

  === IGT changes ===

    ==== Possible regressions ====

    igt@gem_exec_reuse@baggage:
      shard-apl:          PASS -> DMESG-WARN

    
    ==== Warnings ====

    igt@perf_pmu@rc6:
      shard-kbl:          SKIP -> PASS

    
== Known issues ==

  Here are the changes found in Patchwork_10736_full that come from known issues:

  === IGT changes ===

    ==== Issues hit ====

    igt@drv_suspend@shrink:
      shard-snb:          PASS -> INCOMPLETE (fdo#105411, fdo#106886)

    igt@gem_ppgtt@blt-vs-render-ctx0:
      shard-apl:          PASS -> INCOMPLETE (fdo#103927)

    igt@kms_busy@extended-modeset-hang-newfb-render-a:
      shard-skl:          NOTRUN -> DMESG-WARN (fdo#107956) +2

    igt@kms_busy@extended-modeset-hang-newfb-with-reset-render-a:
      shard-kbl:          PASS -> DMESG-WARN (fdo#107956)

    igt@kms_color@pipe-b-ctm-0-5:
      shard-skl:          PASS -> FAIL (fdo#108682)

    igt@kms_cursor_crc@cursor-128x128-onscreen:
      shard-skl:          NOTRUN -> FAIL (fdo#103232)

    igt@kms_cursor_crc@cursor-128x128-suspend:
      shard-skl:          NOTRUN -> FAIL (fdo#103232, fdo#103191)

    igt@kms_cursor_crc@cursor-128x42-sliding:
      shard-glk:          PASS -> FAIL (fdo#103232)

    igt@kms_cursor_crc@cursor-256x256-random:
      shard-apl:          PASS -> FAIL (fdo#103232) +2

    igt@kms_cursor_legacy@flip-vs-cursor-atomic:
      shard-skl:          PASS -> FAIL (fdo#102670)

    igt@kms_flip@modeset-vs-vblank-race-interruptible:
      shard-skl:          PASS -> FAIL (fdo#103060)

    igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-mmap-wc:
      shard-apl:          PASS -> FAIL (fdo#103167)

    igt@kms_frontbuffer_tracking@fbc-1p-primscrn-shrfb-pgflip-blt:
      shard-skl:          NOTRUN -> FAIL (fdo#105682)

    igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-onoff:
      shard-glk:          PASS -> FAIL (fdo#103167) +1

    igt@kms_frontbuffer_tracking@fbc-1p-rte:
      shard-apl:          PASS -> FAIL (fdo#103167, fdo#105682)

    igt@kms_plane@plane-position-covered-pipe-a-planes:
      shard-glk:          PASS -> FAIL (fdo#103166) +1

    igt@kms_plane@plane-position-covered-pipe-c-planes:
      shard-apl:          PASS -> FAIL (fdo#103166) +1

    igt@kms_plane_alpha_blend@pipe-a-alpha-7efc:
      shard-skl:          NOTRUN -> FAIL (fdo#108145, fdo#107815)

    igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
      shard-skl:          PASS -> FAIL (fdo#108145, fdo#107815)

    igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
      shard-skl:          NOTRUN -> FAIL (fdo#108145)

    igt@perf@blocking:
      shard-hsw:          PASS -> FAIL (fdo#102252)

    igt@perf@short-reads:
      shard-skl:          PASS -> FAIL (fdo#103183)

    igt@pm_rpm@basic-rte:
      shard-skl:          PASS -> INCOMPLETE (fdo#107807)

    igt@prime_vgem@wait-bsd:
      shard-snb:          SKIP -> INCOMPLETE (fdo#105411)

    
    ==== Possible fixes ====

    igt@gem_tiled_swapping@non-threaded:
      shard-skl:          DMESG-WARN -> PASS

    igt@kms_atomic_transition@2x-modeset-transitions-nonblocking:
      shard-hsw:          DMESG-WARN (fdo#102614) -> PASS

    igt@kms_cursor_legacy@cursorb-vs-flipb-toggle:
      shard-glk:          DMESG-WARN (fdo#106538, fdo#105763) -> PASS

    igt@kms_flip@2x-dpms-vs-vblank-race:
      shard-hsw:          DMESG-FAIL (fdo#103060, fdo#102614) -> PASS

    igt@kms_flip@flip-vs-expired-vblank-interruptible:
      shard-skl:          FAIL (fdo#105363) -> PASS

    igt@kms_flip_tiling@flip-changes-tiling-yf:
      shard-skl:          FAIL (fdo#108303, fdo#108228) -> PASS

    igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc:
      shard-glk:          FAIL (fdo#103167) -> PASS

    igt@kms_plane_multiple@atomic-pipe-b-tiling-yf:
      shard-apl:          FAIL (fdo#103166) -> PASS

    igt@kms_setmode@basic:
      shard-kbl:          FAIL (fdo#99912) -> PASS

    igt@kms_vblank@pipe-c-ts-continuation-dpms-suspend:
      shard-skl:          INCOMPLETE (fdo#104108, fdo#107773) -> PASS

    igt@pm_rpm@cursor:
      shard-skl:          INCOMPLETE (fdo#107807) -> PASS +2

    igt@pm_rpm@gem-execbuf:
      shard-skl:          INCOMPLETE (fdo#107807, fdo#107803) -> PASS

    
  fdo#102252 https://bugs.freedesktop.org/show_bug.cgi?id=102252
  fdo#102614 https://bugs.freedesktop.org/show_bug.cgi?id=102614
  fdo#102670 https://bugs.freedesktop.org/show_bug.cgi?id=102670
  fdo#103060 https://bugs.freedesktop.org/show_bug.cgi?id=103060
  fdo#103166 https://bugs.freedesktop.org/show_bug.cgi?id=103166
  fdo#103167 https://bugs.freedesktop.org/show_bug.cgi?id=103167
  fdo#103183 https://bugs.freedesktop.org/show_bug.cgi?id=103183
  fdo#103191 https://bugs.freedesktop.org/show_bug.cgi?id=103191
  fdo#103232 https://bugs.freedesktop.org/show_bug.cgi?id=103232
  fdo#103927 https://bugs.freedesktop.org/show_bug.cgi?id=103927
  fdo#104108 https://bugs.freedesktop.org/show_bug.cgi?id=104108
  fdo#105363 https://bugs.freedesktop.org/show_bug.cgi?id=105363
  fdo#105411 https://bugs.freedesktop.org/show_bug.cgi?id=105411
  fdo#105682 https://bugs.freedesktop.org/show_bug.cgi?id=105682
  fdo#105763 https://bugs.freedesktop.org/show_bug.cgi?id=105763
  fdo#106538 https://bugs.freedesktop.org/show_bug.cgi?id=106538
  fdo#106886 https://bugs.freedesktop.org/show_bug.cgi?id=106886
  fdo#107773 https://bugs.freedesktop.org/show_bug.cgi?id=107773
  fdo#107803 https://bugs.freedesktop.org/show_bug.cgi?id=107803
  fdo#107807 https://bugs.freedesktop.org/show_bug.cgi?id=107807
  fdo#107815 https://bugs.freedesktop.org/show_bug.cgi?id=107815
  fdo#107956 https://bugs.freedesktop.org/show_bug.cgi?id=107956
  fdo#108145 https://bugs.freedesktop.org/show_bug.cgi?id=108145
  fdo#108228 https://bugs.freedesktop.org/show_bug.cgi?id=108228
  fdo#108303 https://bugs.freedesktop.org/show_bug.cgi?id=108303
  fdo#108682 https://bugs.freedesktop.org/show_bug.cgi?id=108682
  fdo#99912 https://bugs.freedesktop.org/show_bug.cgi?id=99912


== Participating hosts (6 -> 6) ==

  No changes in participating hosts


== Build changes ==

    * Linux: CI_DRM_5094 -> Patchwork_10736

  CI_DRM_5094: ad5b2d7213c64cbaa5837e45757011af9b3aa366 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_4710: 431f0cfa1475dcaa475d6c30610317b3467bd4e4 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_10736: 98590d8d7b8c57b86caaad4b6bbe98e73da123d7 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_10736/shards.html

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v7] mm, drm/i915: mark pinned shmemfs pages as unevictable
  2018-11-06 18:12     ` Andrew Morton
  (?)
@ 2018-11-07 15:34     ` Chris Wilson
  -1 siblings, 0 replies; 24+ messages in thread
From: Chris Wilson @ 2018-11-07 15:34 UTC (permalink / raw)
  To: Andrew Morton
  Cc: intel-gfx, linux-mm, linux-kernel, Kuo-Hsin Yang,
	Joonas Lahtinen, Peter Zijlstra, Dave Hansen

Quoting Andrew Morton (2018-11-06 18:12:11)
> On Tue,  6 Nov 2018 13:23:24 +0000 Chris Wilson <chris@chris-wilson.co.uk> wrote:
> 
> > From: Kuo-Hsin Yang <vovoy@chromium.org>
> > 
> > The i915 driver uses shmemfs to allocate backing storage for gem
> > objects. These shmemfs pages can be pinned (increased ref count) by
> > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> > wastes a lot of time scanning these pinned pages. In some extreme case,
> > all pages in the inactive anon lru are pinned, and only the inactive
> > anon lru is scanned due to inactive_ratio, the system cannot swap and
> > invokes the oom-killer. Mark these pinned pages as unevictable to speed
> > up vmscan.
> > 
> > Export pagevec API check_move_unevictable_pages().
> > 
> > This patch was inspired by Chris Wilson's change [1].
> > 
> > [1]: https://patchwork.kernel.org/patch/9768741/
> > 
> > ...
> >
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2382,12 +2382,26 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
> >       invalidate_mapping_pages(mapping, 0, (loff_t)-1);
> >  }
> >  
> > +/**
> 
> This token is used to introduce a kerneldoc comment.
> 
> > + * Move pages to appropriate lru and release the pagevec. Decrement the ref
> > + * count of these pages.
> > + */
> 
> But this isn't a kerneldoc comment.
> 
> At least, I don't think it is.  Maybe the parser got smarter when I
> wasn't looking.
> 
> > +static inline void check_release_pagevec(struct pagevec *pvec)
> > +{
> > +     if (pagevec_count(pvec)) {
> > +             check_move_unevictable_pages(pvec);
> > +             __pagevec_release(pvec);
> > +             cond_resched();
> > +     }
> > +}
> 
> This looks too large to be inlined and the compiler will ignore the
> `inline' anyway.

Applied both corrections.

> Otherwise, Acked-by: Andrew Morton <akpm@linux-foundation.org>.  Please
> go ahead and merge via the appropriate drm tree.

Thank you, pushed to drm-intel, expected to arrive around 4.21.
-Chris

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2018-11-07 15:35 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-06  9:30 [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable Kuo-Hsin Yang
2018-11-06  9:30 ` Kuo-Hsin Yang
2018-11-06  9:38 ` ✗ Fi.CI.CHECKPATCH: warning for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev5) Patchwork
2018-11-06 10:10 ` ✗ Fi.CI.BAT: failure " Patchwork
2018-11-06 10:54 ` [PATCH v6] mm, drm/i915: mark pinned shmemfs pages as unevictable Daniel Vetter
2018-11-06 10:54   ` Daniel Vetter
2018-11-06 10:54   ` Daniel Vetter
2018-11-06 15:19   ` Kuo-Hsin Yang
2018-11-06 11:06 ` Chris Wilson
2018-11-06 11:06   ` Chris Wilson
2018-11-06 11:06   ` Chris Wilson
2018-11-06 11:49   ` Kuo-Hsin Yang
2018-11-06 12:14   ` Michal Hocko
2018-11-06 12:14     ` Michal Hocko
2018-11-06 13:23 ` [PATCH v7] " Chris Wilson
2018-11-06 14:14   ` Kuo-Hsin Yang
2018-11-06 14:14     ` Kuo-Hsin Yang
2018-11-06 17:32   ` Dave Hansen
2018-11-06 18:12   ` Andrew Morton
2018-11-06 18:12     ` Andrew Morton
2018-11-07 15:34     ` Chris Wilson
2018-11-06 13:53 ` ✗ Fi.CI.CHECKPATCH: warning for mm, drm/i915: Mark pinned shmemfs pages as unevictable (rev6) Patchwork
2018-11-06 14:14 ` ✓ Fi.CI.BAT: success " Patchwork
2018-11-06 19:38 ` ✗ Fi.CI.IGT: failure " Patchwork
