From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: DRI Development <dri-devel@lists.freedesktop.org>
Cc: "Jack Zhang" <Jack.Zhang1@amd.com>, "Daniel Vetter" <daniel.vetter@ffwll.ch>, "Intel Graphics Development" <intel-gfx@lists.freedesktop.org>, "Luben Tuikov" <luben.tuikov@amd.com>, "Alex Deucher" <alexander.deucher@amd.com>, "Daniel Vetter" <daniel.vetter@intel.com>, "Christian König" <christian.koenig@amd.com>
Subject: [PATCH v3 13/20] drm/sched: Don't store self-dependencies
Date: Thu, 8 Jul 2021 19:37:47 +0200
Message-ID: <20210708173754.3877540-14-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>

This is essentially part of drm_sched_dependency_optimized(), which
only amdgpu seems to make use of. Use it a bit more.

This would mean that as-is amdgpu can't use the dependency helpers, at
least not with the current approach amdgpu has for deciding whether a
vm_flush is needed. Since amdgpu also has very special rules around
implicit fencing it can't use those helpers either, and adding a
drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too
onerous. That way the special-case handling for amdgpu sticks out even
more, and we have higher chances that reviewers who go across all
drivers won't miss it.
Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Luben Tuikov <luben.tuikov@amd.com>
Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Jack Zhang <Jack.Zhang1@amd.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index ad62f1d2991c..db326a1ebf3c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -654,6 +654,13 @@ int drm_sched_job_await_fence(struct drm_sched_job *job,
 	if (!fence)
 		return 0;
 
+	/* if it's a fence from us it's guaranteed to be earlier */
+	if (fence->context == job->entity->fence_context ||
+	    fence->context == job->entity->fence_context + 1) {
+		dma_fence_put(fence);
+		return 0;
+	}
+
 	/* Deduplicate if we already depend on a fence from the same context.
 	 * This lets the size of the array of deps scale with the number of
 	 * engines involved, rather than the number of BOs.
-- 
2.32.0