* [PATCH] drm/nouveau: wait for the exclusive fence after the shared ones v2
@ 2021-12-09 10:23 Christian König
From: Christian König @ 2021-12-09 10:23 UTC
  To: dmoulding, sf, bskeggs; +Cc: nouveau, dri-devel

Always waiting for the exclusive fence first resulted in some performance
regressions. So wait for the shared fences first; by then the exclusive
fence should already be signaled.
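
For illustration, here is a self-contained sketch (not the actual driver
code) of the wait order this patch establishes: all shared fences of the
reservation object first, the exclusive fence last. The function name is
made up, and the nouveau-specific fast path through nouveau_local_fence()
is omitted; the dma_resv/dma_fence calls are the ones used in the diff
below.

/*
 * Illustrative sketch only: wait for the shared fences of a dma_resv
 * object before the exclusive one, mirroring the order introduced in
 * nouveau_fence_sync(). Uses the pre-dma_resv_usage API from the diff.
 */
#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

static int wait_shared_then_exclusive(struct dma_resv *resv,
				      bool exclusive, bool intr)
{
	struct dma_resv_list *fobj;
	struct dma_fence *fence;
	unsigned int i;
	int ret = 0;

	/*
	 * Only an exclusive (write) access needs to wait for the shared
	 * (read) fences; a shared access just reserves its slot and waits
	 * for the exclusive fence, mirroring the patch.
	 */
	if (!exclusive) {
		ret = dma_resv_reserve_shared(resv, 1);
		if (ret)
			return ret;

		fobj = NULL;
	} else {
		fobj = dma_resv_shared_list(resv);
	}

	/*
	 * Shared fences first: waiting for the exclusive fence up front
	 * is what caused the performance regressions.
	 */
	for (i = 0; i < (fobj ? fobj->shared_count : 0) && !ret; ++i) {
		fence = rcu_dereference_protected(fobj->shared[i],
						  dma_resv_held(resv));
		ret = dma_fence_wait(fence, intr);
	}

	/*
	 * By now the exclusive fence, if present, should already be
	 * signaled, so this final wait is expected to be cheap. As a
	 * simplification it is skipped if a shared wait already failed.
	 */
	if (!ret) {
		fence = dma_resv_excl_fence(resv);
		if (fence)
			ret = dma_fence_wait(fence, intr);
	}

	return ret;
}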

v2: fix an incorrectly placed "(", add a comment explaining why we do this.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/nouveau/nouveau_fence.c | 28 +++++++++++++------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index 05d0b3eb3690..0ae416aa76dc 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -353,15 +353,22 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool e
 
 		if (ret)
 			return ret;
-	}
 
-	fobj = dma_resv_shared_list(resv);
-	fence = dma_resv_excl_fence(resv);
+		fobj = NULL;
+	} else {
+		fobj = dma_resv_shared_list(resv);
+	}
 
-	if (fence) {
+	/* Waiting for the exclusive fence first causes performance regressions
+	 * under some circumstances. So manually wait for the shared ones first.
+	 */
+	for (i = 0; i < (fobj ? fobj->shared_count : 0) && !ret; ++i) {
 		struct nouveau_channel *prev = NULL;
 		bool must_wait = true;
 
+		fence = rcu_dereference_protected(fobj->shared[i],
+						dma_resv_held(resv));
+
 		f = nouveau_local_fence(fence, chan->drm);
 		if (f) {
 			rcu_read_lock();
@@ -373,20 +380,13 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool e
 
 		if (must_wait)
 			ret = dma_fence_wait(fence, intr);
-
-		return ret;
 	}
 
-	if (!exclusive || !fobj)
-		return ret;
-
-	for (i = 0; i < fobj->shared_count && !ret; ++i) {
+	fence = dma_resv_excl_fence(resv);
+	if (fence) {
 		struct nouveau_channel *prev = NULL;
 		bool must_wait = true;
 
-		fence = rcu_dereference_protected(fobj->shared[i],
-						dma_resv_held(resv));
-
 		f = nouveau_local_fence(fence, chan->drm);
 		if (f) {
 			rcu_read_lock();
@@ -398,6 +398,8 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool e
 
 		if (must_wait)
 			ret = dma_fence_wait(fence, intr);
+
+		return ret;
 	}
 
 	return ret;
-- 
2.25.1


Thread overview: 8+ messages
2021-12-09 10:23 [PATCH] drm/nouveau: wait for the exclusive fence after the shared ones v2 Christian König
2021-12-10  9:06 ` [Nouveau] " Thorsten Leemhuis
2021-12-11  9:59 ` Stefan Fritsch
2021-12-14  9:19   ` Christian König
2021-12-15 22:32     ` [Nouveau] " Ben Skeggs
2021-12-21 10:11       ` Thorsten Leemhuis
2021-12-21 10:20         ` Christian König
2021-12-20 19:17 ` Dan Moulding
