* [PATCH] drm/i915: Only prune fences after wait-for-all
@ 2018-03-07 17:06 Chris Wilson
2018-03-07 17:13 ` [PATCH v2] " Chris Wilson
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Chris Wilson @ 2018-03-07 17:06 UTC (permalink / raw)
To: intel-gfx; +Cc: Matthew Auld
Currently, we only allow ourselves to prune the fences so long as
all the waits completed (i.e. all the fences we checked were signaled)
and the reservation snapshot did not change across the wait.
However, if we only waited for a subset of the reservation object, i.e.
just the last writer to complete as opposed to all the readers as well,
then we would erroneously conclude we could prune the fences: although
all of our waits were successful, they did not represent the totality
of the reservation object.
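The distinction can be sketched with a small userspace model (this is not the i915 code; the struct and function names are invented for illustration, and fence state is reduced to booleans):

```c
#include <stdbool.h>

/*
 * Toy model of the pruning decision. A reservation object holds one
 * exclusive (writer) fence and an array of shared (reader) fences.
 */
struct resv_model {
	bool excl_signaled;	/* exclusive (writer) fence state */
	int shared_count;	/* number of shared (reader) fences */
	int shared_signaled;	/* how many of those have signaled */
};

/* Pre-patch logic: a successful wait on just the exclusive fence was
 * taken as licence to prune every fence on the object. */
static bool may_prune_buggy(const struct resv_model *r, bool wait_all)
{
	if (wait_all)
		return r->shared_count &&
		       r->shared_signaled == r->shared_count;
	return r->excl_signaled;	/* ignores outstanding readers! */
}

/* Post-patch logic: prune only after a wait-for-all has observed every
 * fence on the object signal. */
static bool may_prune_fixed(const struct resv_model *r, bool wait_all)
{
	return wait_all &&
	       r->shared_count &&
	       r->shared_signaled == r->shared_count;
}
```

With a signaled writer but an outstanding reader, the buggy variant claims the object may be pruned after a writer-only wait, while the fixed variant does not.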
Fixes: e54ca9774777 ("drm/i915: Remove completed fences after a wait")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a5bd07338b46..e3e52b9a74e9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -407,7 +407,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
{
unsigned int seq = __read_seqcount_begin(&resv->seq);
struct dma_fence *excl;
- bool prune_fences = false;
+ bool prune_fences;
if (flags & I915_WAIT_ALL) {
struct dma_fence **shared;
@@ -432,17 +432,13 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
for (; i < count; i++)
dma_fence_put(shared[i]);
kfree(shared);
-
- prune_fences = count && timeout >= 0;
} else {
excl = reservation_object_get_excl_rcu(resv);
}
- if (excl && timeout >= 0) {
+ if (excl && timeout >= 0)
timeout = i915_gem_object_wait_fence(excl, flags, timeout,
rps_client);
- prune_fences = timeout >= 0;
- }
dma_fence_put(excl);
@@ -450,6 +446,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
* signaled and that the reservation object has not been changed (i.e.
* no new fences have been added).
*/
+ prune_fences = flags & I915_WAIT_ALL && timeout >= 0;
if (prune_fences && !__read_seqcount_retry(&resv->seq, seq)) {
if (reservation_object_trylock(resv)) {
if (!__read_seqcount_retry(&resv->seq, seq))
--
2.16.2
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* [PATCH v2] drm/i915: Only prune fences after wait-for-all
2018-03-07 17:06 [PATCH] drm/i915: Only prune fences after wait-for-all Chris Wilson
@ 2018-03-07 17:13 ` Chris Wilson
2018-03-07 17:21 ` Chris Wilson
2018-03-08 17:48 ` Matthew Auld
2018-03-07 17:42 ` ✓ Fi.CI.BAT: success for drm/i915: Only prune fences after wait-for-all (rev2) Patchwork
2018-03-07 19:38 ` ✓ Fi.CI.IGT: " Patchwork
2 siblings, 2 replies; 7+ messages in thread
From: Chris Wilson @ 2018-03-07 17:13 UTC (permalink / raw)
To: intel-gfx; +Cc: Matthew Auld
Currently, we only allow ourselves to prune the fences so long as
all the waits completed (i.e. all the fences we checked were signaled)
and the reservation snapshot did not change across the wait.
However, if we only waited for a subset of the reservation object, i.e.
just the last writer to complete as opposed to all the readers as well,
then we would erroneously conclude we could prune the fences: although
all of our waits were successful, they did not represent the totality
of the reservation object.
v2: We only need to check the shared fences due to construction (i.e.
all of the shared fences will be later than the exclusive fence, if
any).
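The v2 observation can be illustrated with a toy single-timeline model (the real driver tracks one timeline per context; names here are invented, and signaling is reduced to a monotonic seqno comparison):

```c
#include <stdbool.h>

/*
 * Model of the construction invariant: shared fences are added to a
 * reservation object no earlier than its exclusive fence, so every
 * shared seqno >= the exclusive seqno.
 */
struct fence_model {
	unsigned int seqno;	/* position on the model timeline */
};

static bool fence_signaled(const struct fence_model *f,
			   unsigned int completed_seqno)
{
	return f->seqno <= completed_seqno;	/* monotonic completion */
}

/*
 * Because every shared fence is at least as late as the exclusive
 * fence, observing all shared fences signal implies the exclusive
 * fence has signaled too, without waiting on it separately.
 */
static bool excl_implied_signaled(const struct fence_model *shared,
				  int count, unsigned int completed_seqno)
{
	for (int i = 0; i < count; i++)
		if (!fence_signaled(&shared[i], completed_seqno))
			return false;
	return true;
}
```

This is why the wait-for-all path only needs to inspect the shared fences before deciding it may prune.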
Fixes: e54ca9774777 ("drm/i915: Remove completed fences after a wait")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a5bd07338b46..9b48b5101357 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -433,20 +433,28 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
dma_fence_put(shared[i]);
kfree(shared);
+ /*
+ * If both shared fences and an exclusive fence exist,
+ * then by construction the shared fences must be later
+ * than the exclusive fence. If we successfully wait for
+ * all the shared fences, we know that the exclusive fence
+ * must also be signaled. If all the shared fences are
+ * signaled, we can prune the array and recover the
+ * floating references on the fences/requests.
+ */
prune_fences = count && timeout >= 0;
} else {
excl = reservation_object_get_excl_rcu(resv);
}
- if (excl && timeout >= 0) {
+ if (excl && timeout >= 0)
timeout = i915_gem_object_wait_fence(excl, flags, timeout,
rps_client);
- prune_fences = timeout >= 0;
- }
dma_fence_put(excl);
- /* Oportunistically prune the fences iff we know they have *all* been
+ /*
+ * Oportunistically prune the fences iff we know they have *all* been
* signaled and that the reservation object has not been changed (i.e.
* no new fences have been added).
*/
--
2.16.2
* Re: [PATCH v2] drm/i915: Only prune fences after wait-for-all
2018-03-07 17:13 ` [PATCH v2] " Chris Wilson
@ 2018-03-07 17:21 ` Chris Wilson
2018-03-08 17:48 ` Matthew Auld
1 sibling, 0 replies; 7+ messages in thread
From: Chris Wilson @ 2018-03-07 17:21 UTC (permalink / raw)
To: intel-gfx; +Cc: Matthew Auld
Quoting Chris Wilson (2018-03-07 17:13:03)
> Currently, we only allow ourselves to prune the fences so long as
> all the waits completed (i.e. all the fences we checked were signaled)
> and the reservation snapshot did not change across the wait.
> However, if we only waited for a subset of the reservation object, i.e.
> just the last writer to complete as opposed to all the readers as well,
> then we would erroneously conclude we could prune the fences: although
> all of our waits were successful, they did not represent the totality
> of the reservation object.
>
> v2: We only need to check the shared fences due to construction (i.e.
> all of the shared fences will be later than the exclusive fence, if
> any).
>
Testcase: igt/drv_hangman
As it turns out, that is pretty much the only way we can observe this
bug. (At least not without drinking more than one cup of tea.)
You could construct something to try to overwrite the reader on another
engine, though it would be tricky to observe. However, it is the goal of
selftests/live_coherency and gem_exec_flush to catch this type of
error, so they should try harder.
-Chris
* ✓ Fi.CI.BAT: success for drm/i915: Only prune fences after wait-for-all (rev2)
2018-03-07 17:06 [PATCH] drm/i915: Only prune fences after wait-for-all Chris Wilson
2018-03-07 17:13 ` [PATCH v2] " Chris Wilson
@ 2018-03-07 17:42 ` Patchwork
2018-03-07 19:38 ` ✓ Fi.CI.IGT: " Patchwork
2 siblings, 0 replies; 7+ messages in thread
From: Patchwork @ 2018-03-07 17:42 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Only prune fences after wait-for-all (rev2)
URL : https://patchwork.freedesktop.org/series/39547/
State : success
== Summary ==
Series 39547v2 drm/i915: Only prune fences after wait-for-all
https://patchwork.freedesktop.org/api/1.0/series/39547/revisions/2/mbox/
---- Known issues:
Test kms_pipe_crc_basic:
Subgroup suspend-read-crc-pipe-b:
pass -> INCOMPLETE (fi-snb-2520m) fdo#103713
fdo#103713 https://bugs.freedesktop.org/show_bug.cgi?id=103713
fi-bdw-5557u total:288 pass:267 dwarn:0 dfail:0 fail:0 skip:21 time:422s
fi-bdw-gvtdvm total:288 pass:264 dwarn:0 dfail:0 fail:0 skip:24 time:421s
fi-blb-e6850 total:288 pass:223 dwarn:1 dfail:0 fail:0 skip:64 time:372s
fi-bsw-n3050 total:288 pass:242 dwarn:0 dfail:0 fail:0 skip:46 time:503s
fi-bwr-2160 total:288 pass:183 dwarn:0 dfail:0 fail:0 skip:105 time:279s
fi-bxt-dsi total:288 pass:258 dwarn:0 dfail:0 fail:0 skip:30 time:488s
fi-bxt-j4205 total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:492s
fi-byt-j1900 total:288 pass:253 dwarn:0 dfail:0 fail:0 skip:35 time:478s
fi-byt-n2820 total:288 pass:249 dwarn:0 dfail:0 fail:0 skip:39 time:469s
fi-cfl-8700k total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:407s
fi-cfl-s2 total:288 pass:262 dwarn:0 dfail:0 fail:0 skip:26 time:579s
fi-cfl-u total:288 pass:262 dwarn:0 dfail:0 fail:0 skip:26 time:504s
fi-elk-e7500 total:288 pass:229 dwarn:0 dfail:0 fail:0 skip:59 time:414s
fi-gdg-551 total:288 pass:179 dwarn:0 dfail:0 fail:1 skip:108 time:287s
fi-glk-1 total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:514s
fi-hsw-4770 total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:395s
fi-ilk-650 total:288 pass:228 dwarn:0 dfail:0 fail:0 skip:60 time:411s
fi-ivb-3520m total:288 pass:259 dwarn:0 dfail:0 fail:0 skip:29 time:454s
fi-ivb-3770 total:288 pass:255 dwarn:0 dfail:0 fail:0 skip:33 time:421s
fi-kbl-7500u total:288 pass:263 dwarn:1 dfail:0 fail:0 skip:24 time:467s
fi-kbl-7567u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:462s
fi-kbl-r total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:510s
fi-pnv-d510 total:288 pass:222 dwarn:1 dfail:0 fail:0 skip:65 time:585s
fi-skl-6260u total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:425s
fi-skl-6600u total:288 pass:261 dwarn:0 dfail:0 fail:0 skip:27 time:521s
fi-skl-6700hq total:288 pass:262 dwarn:0 dfail:0 fail:0 skip:26 time:534s
fi-skl-6700k2 total:288 pass:264 dwarn:0 dfail:0 fail:0 skip:24 time:498s
fi-skl-6770hq total:288 pass:268 dwarn:0 dfail:0 fail:0 skip:20 time:484s
fi-skl-guc total:288 pass:260 dwarn:0 dfail:0 fail:0 skip:28 time:425s
fi-skl-gvtdvm total:288 pass:265 dwarn:0 dfail:0 fail:0 skip:23 time:425s
fi-snb-2520m total:245 pass:211 dwarn:0 dfail:0 fail:0 skip:33
fi-snb-2600 total:288 pass:248 dwarn:0 dfail:0 fail:0 skip:40 time:388s
c362488c07f3f59581a479b2f3b82219fb5bfee3 drm-tip: 2018y-03m-07d-15h-23m-59s UTC integration manifest
3f4f3a155ac2 drm/i915: Only prune fences after wait-for-all
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8261/issues.html
* ✓ Fi.CI.IGT: success for drm/i915: Only prune fences after wait-for-all (rev2)
2018-03-07 17:06 [PATCH] drm/i915: Only prune fences after wait-for-all Chris Wilson
2018-03-07 17:13 ` [PATCH v2] " Chris Wilson
2018-03-07 17:42 ` ✓ Fi.CI.BAT: success for drm/i915: Only prune fences after wait-for-all (rev2) Patchwork
@ 2018-03-07 19:38 ` Patchwork
2 siblings, 0 replies; 7+ messages in thread
From: Patchwork @ 2018-03-07 19:38 UTC (permalink / raw)
To: Chris Wilson; +Cc: intel-gfx
== Series Details ==
Series: drm/i915: Only prune fences after wait-for-all (rev2)
URL : https://patchwork.freedesktop.org/series/39547/
State : success
== Summary ==
---- Known issues:
Test gem_eio:
Subgroup in-flight-external:
incomplete -> PASS (shard-apl) fdo#105341
Test kms_chv_cursor_fail:
Subgroup pipe-b-256x256-left-edge:
dmesg-warn -> INCOMPLETE (shard-snb) fdo#105185 +3
Test kms_cursor_crc:
Subgroup cursor-256x256-suspend:
fail -> PASS (shard-apl) fdo#103375
Test kms_frontbuffer_tracking:
Subgroup fbc-rgb565-draw-mmap-wc:
pass -> FAIL (shard-apl) fdo#101623 +1
Test kms_rotation_crc:
Subgroup primary-rotation-180:
pass -> FAIL (shard-snb) fdo#103925
Test kms_sysfs_edid_timing:
pass -> WARN (shard-apl) fdo#100047
Test pm_lpsp:
Subgroup screens-disabled:
fail -> PASS (shard-hsw) fdo#104941
fdo#105341 https://bugs.freedesktop.org/show_bug.cgi?id=105341
fdo#105185 https://bugs.freedesktop.org/show_bug.cgi?id=105185
fdo#103375 https://bugs.freedesktop.org/show_bug.cgi?id=103375
fdo#101623 https://bugs.freedesktop.org/show_bug.cgi?id=101623
fdo#103925 https://bugs.freedesktop.org/show_bug.cgi?id=103925
fdo#100047 https://bugs.freedesktop.org/show_bug.cgi?id=100047
fdo#104941 https://bugs.freedesktop.org/show_bug.cgi?id=104941
shard-apl total:3467 pass:1823 dwarn:1 dfail:0 fail:10 skip:1632 time:12136s
shard-hsw total:3467 pass:1773 dwarn:1 dfail:0 fail:1 skip:1691 time:11607s
shard-snb total:3432 pass:1350 dwarn:1 dfail:0 fail:2 skip:2078 time:6772s
Blacklisted hosts:
shard-kbl total:3308 pass:1859 dwarn:2 dfail:0 fail:7 skip:1438 time:8602s
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_8261/shards.html
* Re: [PATCH v2] drm/i915: Only prune fences after wait-for-all
2018-03-07 17:13 ` [PATCH v2] " Chris Wilson
2018-03-07 17:21 ` Chris Wilson
@ 2018-03-08 17:48 ` Matthew Auld
2018-03-08 18:04 ` Chris Wilson
1 sibling, 1 reply; 7+ messages in thread
From: Matthew Auld @ 2018-03-08 17:48 UTC (permalink / raw)
To: Chris Wilson; +Cc: Intel Graphics Development, Matthew Auld
On 7 March 2018 at 17:13, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Currently, we only allow ourselves to prune the fences so long as
> all the waits completed (i.e. all the fences we checked were signaled)
> and the reservation snapshot did not change across the wait.
> However, if we only waited for a subset of the reservation object, i.e.
> just the last writer to complete as opposed to all the readers as well,
> then we would erroneously conclude we could prune the fences: although
> all of our waits were successful, they did not represent the totality
> of the reservation object.
>
> v2: We only need to check the shared fences due to construction (i.e.
> all of the shared fences will be later than the exclusive fence, if
> any).
>
> Fixes: e54ca9774777 ("drm/i915: Remove completed fences after a wait")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> ---
> drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index a5bd07338b46..9b48b5101357 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -433,20 +433,28 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
> dma_fence_put(shared[i]);
> kfree(shared);
>
> + /*
> + * If both shared fences and an exclusive fence exist,
> + * then by construction the shared fences must be later
> + * than the exclusive fence. If we successfully wait for
> + * all the shared fences, we know that the exclusive fence
> + * must also be signaled. If all the shared fences are
> + * signaled, we can prune the array and recover the
> + * floating references on the fences/requests.
> + */
> prune_fences = count && timeout >= 0;
> } else {
> excl = reservation_object_get_excl_rcu(resv);
> }
>
> - if (excl && timeout >= 0) {
> + if (excl && timeout >= 0)
> timeout = i915_gem_object_wait_fence(excl, flags, timeout,
> rps_client);
> - prune_fences = timeout >= 0;
> - }
>
> dma_fence_put(excl);
>
> - /* Oportunistically prune the fences iff we know they have *all* been
> + /*
> + * Oportunistically prune the fences iff we know they have *all* been
Opportunistically
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
* Re: [PATCH v2] drm/i915: Only prune fences after wait-for-all
2018-03-08 17:48 ` Matthew Auld
@ 2018-03-08 18:04 ` Chris Wilson
0 siblings, 0 replies; 7+ messages in thread
From: Chris Wilson @ 2018-03-08 18:04 UTC (permalink / raw)
To: Matthew Auld; +Cc: Intel Graphics Development, Matthew Auld
Quoting Matthew Auld (2018-03-08 17:48:48)
> On 7 March 2018 at 17:13, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > Currently, we only allow ourselves to prune the fences so long as
> > all the waits completed (i.e. all the fences we checked were signaled)
> > and the reservation snapshot did not change across the wait.
> > However, if we only waited for a subset of the reservation object, i.e.
> > just the last writer to complete as opposed to all the readers as well,
> > then we would erroneously conclude we could prune the fences: although
> > all of our waits were successful, they did not represent the totality
> > of the reservation object.
> >
> > v2: We only need to check the shared fences due to construction (i.e.
> > all of the shared fences will be later than the exclusive fence, if
> > any).
> >
> > Fixes: e54ca9774777 ("drm/i915: Remove completed fences after a wait")
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > ---
> > drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++----
> > 1 file changed, 12 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index a5bd07338b46..9b48b5101357 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -433,20 +433,28 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
> > dma_fence_put(shared[i]);
> > kfree(shared);
> >
> > + /*
> > + * If both shared fences and an exclusive fence exist,
> > + * then by construction the shared fences must be later
> > + * than the exclusive fence. If we successfully wait for
> > + * all the shared fences, we know that the exclusive fence
> > + * must also be signaled. If all the shared fences are
> > + * signaled, we can prune the array and recover the
> > + * floating references on the fences/requests.
> > + */
> > prune_fences = count && timeout >= 0;
> > } else {
> > excl = reservation_object_get_excl_rcu(resv);
> > }
> >
> > - if (excl && timeout >= 0) {
> > + if (excl && timeout >= 0)
> > timeout = i915_gem_object_wait_fence(excl, flags, timeout,
> > rps_client);
> > - prune_fences = timeout >= 0;
> > - }
> >
> > dma_fence_put(excl);
> >
> > - /* Oportunistically prune the fences iff we know they have *all* been
> > + /*
> > + * Oportunistically prune the fences iff we know they have *all* been
> Opportunistically
>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
And a bonus spelling fix, thank you!
Pushed,
-Chris