* [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker
@ 2017-05-24 14:39 Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 2/5] drm/i915: Allow kswapd to pause the device whilst reaping Chris Wilson
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:39 UTC (permalink / raw)
  To: intel-gfx

Having resolved whether or not we would deadlock upon a call to
mutex_lock(&dev->struct_mutex), we can then wait for the contended
struct_mutex if we are not the owner. This should significantly improve
the chance of running the shrinker for other processes whilst the GPU is
busy.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index b409e67c5c72..03e08e1853e6 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -39,8 +39,8 @@ static bool shrinker_lock(struct drm_i915_private *dev_priv, bool *unlock)
 {
 	switch (mutex_trylock_recursive(&dev_priv->drm.struct_mutex)) {
 	case MUTEX_TRYLOCK_FAILED:
-		return false;
-
+		if (mutex_lock_killable(&dev_priv->drm.struct_mutex))
+			return false;
 	case MUTEX_TRYLOCK_SUCCESS:
 		*unlock = true;
 		return true;
-- 
2.11.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 2/5] drm/i915: Allow kswapd to pause the device whilst reaping
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
@ 2017-05-24 14:39 ` Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 3/5] drm/i915: Encourage our shrinker more when our shmemfs allocations fails Chris Wilson
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:39 UTC (permalink / raw)
  To: intel-gfx

In commit 5763ff04dc4e ("drm/i915: Avoid GPU stalls from kswapd") we
stopped direct reclaim and kswapd from triggering GPU/client stalls
whilst running (by restricting the objects they could reap to be idle).

However with abusive GPU usage, it becomes quite easy to starve kswapd
of memory and prevent it from making forward progress towards obtaining
enough free memory (thus driving the system closer to swap exhaustion).
Relax the previous restriction to allow kswapd (but not direct reclaim)
to stall the device whilst reaping purgeable pages.

v2: Also acquire the rpm wakelock to allow kswapd to unbind buffers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 03e08e1853e6..5fc1ddc268fb 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -324,6 +324,15 @@ i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 					 sc->nr_to_scan - freed,
 					 I915_SHRINK_BOUND |
 					 I915_SHRINK_UNBOUND);
+	if (freed < sc->nr_to_scan && current_is_kswapd()) {
+		intel_runtime_pm_get(dev_priv);
+		freed += i915_gem_shrink(dev_priv,
+					 sc->nr_to_scan - freed,
+					 I915_SHRINK_ACTIVE |
+					 I915_SHRINK_BOUND |
+					 I915_SHRINK_UNBOUND);
+		intel_runtime_pm_put(dev_priv);
+	}
 
 	shrinker_unlock(dev_priv, unlock);
 
-- 
2.11.0


* [PATCH v2 3/5] drm/i915: Encourage our shrinker more when our shmemfs allocations fails
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 2/5] drm/i915: Allow kswapd to pause the device whilst reaping Chris Wilson
@ 2017-05-24 14:39 ` Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 4/5] drm/i915: Remove __GFP_NORETRY from our buffer allocator Chris Wilson
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

Commit 24f8e00a8a2e ("drm/i915: Prefer to report ENOMEM rather than
incur the oom for gfx allocations") made the bold decision to try and
avoid the oomkiller by reporting -ENOMEM to userspace if our allocation
failed after attempting to free enough buffer objects. In short, it
appears we were giving up too easily (even before we start wondering if
one pass of reclaim is as strong as we would like). Part of the problem
is that if we shrink only just enough pages for the expected allocation,
the likelihood of those pages becoming available to us is less than 100%.
To counteract that, we ask for twice the number of pages to be made
available. Furthermore, we allow the shrinker to pull pages from the
active list in later passes.

Fixes: 24f8e00a8a2e ("drm/i915: Prefer to report ENOMEM rather than incur the oom for gfx allocations")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_gem.c | 52 ++++++++++++++++++++++++-----------------
 1 file changed, 31 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a637cc05cc4a..62f8d1492c2d 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2337,8 +2337,8 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	struct page *page;
 	unsigned long last_pfn = 0;	/* suppress gcc warning */
 	unsigned int max_segment;
+	gfp_t noreclaim;
 	int ret;
-	gfp_t gfp;
 
 	/* Assert that the object is not currently in any GPU domain. As it
 	 * wasn't in the GTT, there shouldn't be any way it could have been in
@@ -2367,22 +2367,33 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
-	gfp = mapping_gfp_constraint(mapping, ~(__GFP_IO | __GFP_RECLAIM));
-	gfp |= __GFP_NORETRY | __GFP_NOWARN;
+	noreclaim = mapping_gfp_constraint(mapping,
+					   ~(__GFP_IO | __GFP_RECLAIM));
+	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
+
 	sg = st->sgl;
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
-		page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-		if (unlikely(IS_ERR(page))) {
-			i915_gem_shrink(dev_priv,
-					page_count,
-					I915_SHRINK_BOUND |
-					I915_SHRINK_UNBOUND |
-					I915_SHRINK_PURGEABLE);
+		const unsigned int shrink[] = {
+			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,
+			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE | I915_SHRINK_ACTIVE,
+			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_ACTIVE,
+			0,
+		}, *s = shrink;
+		gfp_t gfp = noreclaim;
+
+		do {
 			page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-		}
-		if (unlikely(IS_ERR(page))) {
-			gfp_t reclaim;
+			if (likely(!IS_ERR(page)))
+				break;
+
+			if (!*s) {
+				ret = PTR_ERR(page);
+				goto err_sg;
+			}
+
+			i915_gem_shrink(dev_priv, 2 * page_count, *s++);
+			cond_resched();
 
 			/* We've tried hard to allocate the memory by reaping
 			 * our own buffer, now let the real VM do its job and
@@ -2392,15 +2403,13 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 			 * defer the oom here by reporting the ENOMEM back
 			 * to userspace.
 			 */
-			reclaim = mapping_gfp_mask(mapping);
-			reclaim |= __GFP_NORETRY; /* reclaim, but no oom */
-
-			page = shmem_read_mapping_page_gfp(mapping, i, reclaim);
-			if (IS_ERR(page)) {
-				ret = PTR_ERR(page);
-				goto err_sg;
+			if (!*s) {
+				/* reclaim and warn, but no oom */
+				gfp = mapping_gfp_mask(mapping);
+				gfp |= __GFP_NORETRY;
 			}
-		}
+		} while (1);
+
 		if (!i ||
 		    sg->length >= max_segment ||
 		    page_to_pfn(page) != last_pfn + 1) {
@@ -4281,6 +4290,7 @@ i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 
 	mapping = obj->base.filp->f_mapping;
 	mapping_set_gfp_mask(mapping, mask);
+	GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
 
 	i915_gem_object_init(obj, &i915_gem_object_ops);
 
-- 
2.11.0


* [PATCH v2 4/5] drm/i915: Remove __GFP_NORETRY from our buffer allocator
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 2/5] drm/i915: Allow kswapd to pause the device whilst reaping Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 3/5] drm/i915: Encourage our shrinker more when our shmemfs allocations fails Chris Wilson
@ 2017-05-24 14:39 ` Chris Wilson
  2017-05-24 14:39 ` [PATCH v2 5/5] drm/i915: Revoke any shmemfs mappings on shrinking Chris Wilson
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:39 UTC (permalink / raw)
  To: intel-gfx; +Cc: Daniel Vetter

I tried __GFP_NORETRY in the belief that __GFP_RECLAIM was effective. It
struggles with reclaim via kswapd (through an inconsistency within
throttle_direct_reclaim(), and even then the race between multiple
allocators makes the two-step reclaim-then-allocate sequence fragile), and
as our buffers are always dirty (with very few exceptions), we require
kswapd to perform pageout on them. The only effective means of waiting
on kswapd is to retry the allocations (i.e. not set __GFP_NORETRY). That
leaves us with the dilemma of invoking the oomkiller instead of
propagating the allocation failure back to userspace where it can be
handled more gracefully (one hopes). We cheat and note that __GFP_THISNODE
has the side-effect of preventing oom and has no consequence for our final
attempt at allocation.

Fixes: 24f8e00a8a2e ("drm/i915: Prefer to report ENOMEM rather than incur the oom for gfx allocations")
Testcase: igt/gem_tiled_swapping
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/i915/i915_gem.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 62f8d1492c2d..7d400d882283 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2406,7 +2406,21 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 			if (!*s) {
 				/* reclaim and warn, but no oom */
 				gfp = mapping_gfp_mask(mapping);
-				gfp |= __GFP_NORETRY;
+
+				/* Our bo are always dirty and so we require
+				 * kswapd to reclaim our pages (direct reclaim
+				 * performs no swapping on its own). However,
+				 * direct reclaim is meant to wait for kswapd
+				 * when under pressure, this is broken. As a
+				 * result __GFP_RECLAIM is unreliable and fails
+				 * to actually reclaim dirty pages -- unless
+				 * you try over and over again with
+				 * !__GFP_NORETRY. However, we still want to
+				 * fail this allocation rather than trigger
+				 * the out-of-memory killer and for this we
+				 * subvert __GFP_THISNODE for that side effect.
+				 */
+				gfp |= __GFP_THISNODE;
 			}
 		} while (1);
 
-- 
2.11.0


* [PATCH v2 5/5] drm/i915: Revoke any shmemfs mappings on shrinking
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
                   ` (2 preceding siblings ...)
  2017-05-24 14:39 ` [PATCH v2 4/5] drm/i915: Remove __GFP_NORETRY from our buffer allocator Chris Wilson
@ 2017-05-24 14:39 ` Chris Wilson
  2017-05-24 14:55 ` [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
  2017-05-24 14:57 ` ✓ Fi.CI.BAT: success for series starting with [v2,1/5] " Patchwork
  5 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:39 UTC (permalink / raw)
  To: intel-gfx

When trying to shrink our buffers, also revoke any existing mappings
(forcing them to be faulted again on reuse) to improve the likelihood of
us being able to pageout the buffer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_gem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7d400d882283..a9388a5eef31 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2212,8 +2212,9 @@ void __i915_gem_object_invalidate(struct drm_i915_gem_object *obj)
 	if (obj->base.filp == NULL)
 		return;
 
-	mapping = obj->base.filp->f_mapping,
+	mapping = obj->base.filp->f_mapping;
 	invalidate_mapping_pages(mapping, 0, (loff_t)-1);
+	unmap_mapping_range(mapping, 0, (loff_t)-1, 0);
 }
 
 static void
-- 
2.11.0


* Re: [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
                   ` (3 preceding siblings ...)
  2017-05-24 14:39 ` [PATCH v2 5/5] drm/i915: Revoke any shmemfs mappings on shrinking Chris Wilson
@ 2017-05-24 14:55 ` Chris Wilson
  2017-05-24 15:31   ` Joonas Lahtinen
  2017-05-24 14:57 ` ✓ Fi.CI.BAT: success for series starting with [v2,1/5] " Patchwork
  5 siblings, 1 reply; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 14:55 UTC (permalink / raw)
  To: intel-gfx

On Wed, May 24, 2017 at 03:39:37PM +0100, Chris Wilson wrote:
> Having resolved whether or not we would deadlock upon a call to
> mutex_lock(&dev->struct_mutex), we can then wait for the contended
> struct_mutex if we are not the owner. This should significantly improve
> the chance of running the shrinker for other processes whilst the GPU is
> busy.
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Scratch this, cyclic deadlocks with the reclaim locks instead. If BAT
doesn't pick this up, I'll be worried.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* ✓ Fi.CI.BAT: success for series starting with [v2,1/5] drm/i915: Wait for struct_mutex inside shrinker
  2017-05-24 14:39 [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
                   ` (4 preceding siblings ...)
  2017-05-24 14:55 ` [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
@ 2017-05-24 14:57 ` Patchwork
  2017-05-24 15:08   ` Chris Wilson
  5 siblings, 1 reply; 10+ messages in thread
From: Patchwork @ 2017-05-24 14:57 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v2,1/5] drm/i915: Wait for struct_mutex inside shrinker
URL   : https://patchwork.freedesktop.org/series/24874/
State : success

== Summary ==

Series 24874v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/24874/revisions/1/mbox/

fi-bdw-5557u     total:278  pass:267  dwarn:0   dfail:0   fail:0   skip:11  time:444s
fi-bdw-gvtdvm    total:278  pass:256  dwarn:8   dfail:0   fail:0   skip:14  time:431s
fi-bsw-n3050     total:278  pass:242  dwarn:0   dfail:0   fail:0   skip:36  time:586s
fi-bxt-j4205     total:278  pass:259  dwarn:0   dfail:0   fail:0   skip:19  time:504s
fi-byt-j1900     total:278  pass:254  dwarn:0   dfail:0   fail:0   skip:24  time:488s
fi-byt-n2820     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:481s
fi-hsw-4770      total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:415s
fi-hsw-4770r     total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:410s
fi-ilk-650       total:278  pass:228  dwarn:0   dfail:0   fail:0   skip:50  time:420s
fi-ivb-3520m     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:492s
fi-ivb-3770      total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:459s
fi-kbl-7500u     total:278  pass:255  dwarn:5   dfail:0   fail:0   skip:18  time:464s
fi-kbl-7560u     total:278  pass:263  dwarn:5   dfail:0   fail:0   skip:10  time:568s
fi-skl-6260u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:458s
fi-skl-6700hq    total:278  pass:239  dwarn:0   dfail:1   fail:17  skip:21  time:435s
fi-skl-6700k     total:278  pass:256  dwarn:4   dfail:0   fail:0   skip:18  time:467s
fi-skl-6770hq    total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:496s
fi-skl-gvtdvm    total:278  pass:265  dwarn:0   dfail:0   fail:0   skip:13  time:436s
fi-snb-2520m     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:529s
fi-snb-2600      total:278  pass:249  dwarn:0   dfail:0   fail:0   skip:29  time:406s

7808a0f3330f5bdf11b5b6880af8407ad6200989 drm-tip: 2017y-05m-24d-11h-04m-21s UTC integration manifest
0ab07f1 drm/i915: Revoke any shmemfs mappings on shrinking
6e58226 drm/i915: Remove __GFP_NORETRY from our buffer allocator
6479876 drm/i915: Encourage our shrinker more when our shmemfs allocations fails
e2b00bd drm/i915: Allow kswapd to pause the device whilst reaping
2baa732 drm/i915: Wait for struct_mutex inside shrinker

== Logs ==

For more details see: https://intel-gfx-ci.01.org/CI/Patchwork_4800/

* Re: ✓ Fi.CI.BAT: success for series starting with [v2,1/5] drm/i915: Wait for struct_mutex inside shrinker
  2017-05-24 14:57 ` ✓ Fi.CI.BAT: success for series starting with [v2,1/5] " Patchwork
@ 2017-05-24 15:08   ` Chris Wilson
  0 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 15:08 UTC (permalink / raw)
  To: intel-gfx

On Wed, May 24, 2017 at 02:57:08PM -0000, Patchwork wrote:
> == Series Details ==
> 
> Series: series starting with [v2,1/5] drm/i915: Wait for struct_mutex inside shrinker
> URL   : https://patchwork.freedesktop.org/series/24874/
> State : success
> 
> == Summary ==
> 
> Series 24874v1 Series without cover letter
> https://patchwork.freedesktop.org/api/1.0/series/24874/revisions/1/mbox/
> 
> fi-bdw-5557u     total:278  pass:267  dwarn:0   dfail:0   fail:0   skip:11  time:444s
> fi-bdw-gvtdvm    total:278  pass:256  dwarn:8   dfail:0   fail:0   skip:14  time:431s
> fi-bsw-n3050     total:278  pass:242  dwarn:0   dfail:0   fail:0   skip:36  time:586s
> fi-bxt-j4205     total:278  pass:259  dwarn:0   dfail:0   fail:0   skip:19  time:504s
> fi-byt-j1900     total:278  pass:254  dwarn:0   dfail:0   fail:0   skip:24  time:488s
> fi-byt-n2820     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:481s
> fi-hsw-4770      total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:415s
> fi-hsw-4770r     total:278  pass:262  dwarn:0   dfail:0   fail:0   skip:16  time:410s
> fi-ilk-650       total:278  pass:228  dwarn:0   dfail:0   fail:0   skip:50  time:420s
> fi-ivb-3520m     total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:492s
> fi-ivb-3770      total:278  pass:260  dwarn:0   dfail:0   fail:0   skip:18  time:459s
> fi-kbl-7500u     total:278  pass:255  dwarn:5   dfail:0   fail:0   skip:18  time:464s
> fi-kbl-7560u     total:278  pass:263  dwarn:5   dfail:0   fail:0   skip:10  time:568s
> fi-skl-6260u     total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:458s
> fi-skl-6700hq    total:278  pass:239  dwarn:0   dfail:1   fail:17  skip:21  time:435s
> fi-skl-6700k     total:278  pass:256  dwarn:4   dfail:0   fail:0   skip:18  time:467s
> fi-skl-6770hq    total:278  pass:268  dwarn:0   dfail:0   fail:0   skip:10  time:496s
> fi-skl-gvtdvm    total:278  pass:265  dwarn:0   dfail:0   fail:0   skip:13  time:436s
> fi-snb-2520m     total:278  pass:250  dwarn:0   dfail:0   fail:0   skip:28  time:529s
> fi-snb-2600      total:278  pass:249  dwarn:0   dfail:0   fail:0   skip:29  time:406s

Hmm, I guess we have nothing that touches the shrinker in anger? In
between tests we invoke both i915_gem_shrink() and /proc/sys/vm/drop_caches,
so we should have some lockdep coverage, but it appears we don't trigger
mm/vmscan.c itself.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker
  2017-05-24 14:55 ` [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker Chris Wilson
@ 2017-05-24 15:31   ` Joonas Lahtinen
  2017-05-24 15:38     ` Chris Wilson
  0 siblings, 1 reply; 10+ messages in thread
From: Joonas Lahtinen @ 2017-05-24 15:31 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx

On ke, 2017-05-24 at 15:55 +0100, Chris Wilson wrote:
> On Wed, May 24, 2017 at 03:39:37PM +0100, Chris Wilson wrote:
> > 
> > Having resolved whether or not we would deadlock upon a call to
> > mutex_lock(&dev->struct_mutex), we can then wait for the contended
> > struct_mutex if we are not the owner. This should significantly improve
> > the chance of running the shrinker for other processes whilst the GPU is
> > busy.
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> 
> Scratch this, cyclic deadlocks with the reclaim locks instead. If BAT
> doesn't pick this up, I'll be worried.

Whole series or just this patch?

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

* Re: [PATCH v2 1/5] drm/i915: Wait for struct_mutex inside shrinker
  2017-05-24 15:31   ` Joonas Lahtinen
@ 2017-05-24 15:38     ` Chris Wilson
  0 siblings, 0 replies; 10+ messages in thread
From: Chris Wilson @ 2017-05-24 15:38 UTC (permalink / raw)
  To: Joonas Lahtinen; +Cc: intel-gfx

On Wed, May 24, 2017 at 06:31:10PM +0300, Joonas Lahtinen wrote:
> On ke, 2017-05-24 at 15:55 +0100, Chris Wilson wrote:
> > On Wed, May 24, 2017 at 03:39:37PM +0100, Chris Wilson wrote:
> > > 
> > > Having resolved whether or not we would deadlock upon a call to
> > > mutex_lock(&dev->struct_mutex), we can then wait for the contended
> > > struct_mutex if we are not the owner. This should significantly improve
> > > the chance of running the shrinker for other processes whilst the GPU is
> > > busy.
> > > 
> > > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > 
> > Scratch this, cyclic deadlocks with the reclaim locks instead. If BAT
> > doesn't pick this up, I'll be worried.
> 
> Whole series or just this patch?

Just this patch is buggy (as far as I am aware).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
