From: "Christian König" <christian.koenig@amd.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: DRI Development <dri-devel@lists.freedesktop.org>,
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Lucas Stach <l.stach@pengutronix.de>,
	Melissa Wen <mwen@igalia.com>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Luben Tuikov <luben.tuikov@amd.com>,
	Andrey Grodzovsky <andrey.grodzovsky@amd.com>,
	Alex Deucher <alexander.deucher@amd.com>
Subject: Re: [PATCH v5 14/20] drm/sched: Don't store self-dependencies
Date: Thu, 5 Aug 2021 15:57:05 +0200
Message-ID: <32f5f17d-9c2b-c6e3-9809-4100bcadf21c@amd.com>
In-Reply-To: <CAKMK7uEVbTOuJVvQDybFM+9x4LUP+ojcCzWfkp4sau3oX7fjXQ@mail.gmail.com>

Am 05.08.21 um 15:25 schrieb Daniel Vetter:
> On Thu, Aug 5, 2021 at 3:18 PM Christian König <christian.koenig@amd.com> wrote:
>>
>>
>> Am 05.08.21 um 12:46 schrieb Daniel Vetter:
>>> This is essentially part of drm_sched_dependency_optimized(), which
>>> only amdgpu seems to make use of. Use it a bit more.
>>>
>>> This would mean that as-is amdgpu can't use the dependency helpers, at
>>> least not with the current approach amdgpu has for deciding whether a
>>> vm_flush is needed. Since amdgpu also has very special rules around
>>> implicit fencing it can't use those helpers either, and adding a
>>> drm_sched_job_await_fence_always or similar for amdgpu wouldn't be too
>>> onerous. That way the special-case handling for amdgpu sticks out even
>>> more, and we have a higher chance that reviewers who go across all
>>> drivers won't miss it.
>> Well, you should probably drop the sentence about the vm_flush; it is
>> completely unrelated.
>>
>> In addition to that, I still don't think that this is a good idea.
>> Dependency handling is something completely driver-specific.
>>
>> E.g. even when you have submitted jobs back to back, they might still
>> need a cache flush in between, and that is not only the case for amdgpu.
>>
>> What you can do is perform this optimization while looking at the fences
>> later on, and then note that you have done so and which hw fence you
>> used last instead.
> Out of the 6 drivers using drm/sched, 5 can use this. When we get i915
> over, that one will be added to the list. amdgpu can't use any of this
> anyway due to the vm_id allocation requirements, which is why I
> mention that. Also note that all the callbacks are still there, so you
> can just ignore all of this and still build your own, like amdgpu does.

The VMID allocation stuff is rather easy to handle; that's why I noted
we should remove that sentence.

The problematic part is handling the cache flush and pipeline sync,
which you make impossible with this change.
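
For reference, the check in question, pulled out as a standalone predicate
(an illustrative sketch only; the field names are taken from the hunk quoted
further down -- in drm_sched the entity's fence_context is used for the
scheduled fence and fence_context + 1 for the finished fence, so neither
ever ends up in the job's dependency array):

static bool is_self_dependency(struct drm_sched_job *job,
                               struct dma_fence *fence)
{
        /* Fences emitted by this entity are ordered by submission already. */
        return fence->context == job->entity->fence_context ||
               fence->context == job->entity->fence_context + 1;
}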

> So I'm not sure what exactly your objection is, aside from "this doesn't
> fit for amdgpu", which a) I know, b) the commit message explains, and c)
> doesn't actually hurt amdgpu in the slightest. And we still get the
> benefit that for most drivers it's a nice optimization.

Well, that's exactly what I wanted to avoid. We can still use this in
amdgpu even with the VMID allocation stuff, and I still hope to do so.

Can't we add this as a wrapper or similar?
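
For illustration, one possible shape of such a wrapper -- a rough sketch
only, loosely following the drm_sched_job_await_fence_always idea from the
commit message; the function and helper names (amdgpu_job_add_dependency,
amdgpu_job_note_self_fence) are hypothetical, not existing amdgpu code:

int amdgpu_job_add_dependency(struct amdgpu_job *job, struct dma_fence *fence)
{
        struct drm_sched_entity *entity = job->base.entity;

        /* Note same-entity fences for the driver's own pipeline-sync /
         * cache-flush bookkeeping before the common helper drops them as
         * self-dependencies. amdgpu_job_note_self_fence() is hypothetical.
         */
        if (fence &&
            (fence->context == entity->fence_context ||
             fence->context == entity->fence_context + 1))
                amdgpu_job_note_self_fence(job, fence);

        /* The common helper takes over the fence reference. */
        return drm_sched_job_add_dependency(&job->base, fence);
}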

Christian.

> -Daniel
>
>> Regards,
>> Christian.
>>
>>> Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
>>> Acked-by: Melissa Wen <mwen@igalia.com>
>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>> Cc: "Christian König" <christian.koenig@amd.com>
>>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Luben Tuikov <luben.tuikov@amd.com>
>>> Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> Cc: Alex Deucher <alexander.deucher@amd.com>
>>> ---
>>>    drivers/gpu/drm/scheduler/sched_main.c | 7 +++++++
>>>    1 file changed, 7 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index f77456929139..49e507f91ec0 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -660,6 +660,13 @@ int drm_sched_job_add_dependency(struct drm_sched_job *job,
>>>        if (!fence)
>>>                return 0;
>>>
>>> +     /* if it's a fence from us it's guaranteed to be earlier */
>>> +     if (fence->context == job->entity->fence_context ||
>>> +         fence->context == job->entity->fence_context + 1) {
>>> +             dma_fence_put(fence);
>>> +             return 0;
>>> +     }
>>> +
>>>        /* Deduplicate if we already depend on a fence from the same context.
>>>         * This lets the size of the array of deps scale with the number of
>>>         * engines involved, rather than the number of BOs.
>


