From: "Christian König" <christian.koenig@amd.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	DRI Development <dri-devel@lists.freedesktop.org>,
	Steven Price <steven.price@arm.com>,
	Boris Brezillon <boris.brezillon@collabora.com>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Lee Jones <lee.jones@linaro.org>
Subject: Re: [PATCH v4 02/18] drm/sched: Barriers are needed for entity->last_scheduled
Date: Tue, 13 Jul 2021 09:25:25 +0200	[thread overview]
Message-ID: <5c5ef6ba-49d0-36cc-b537-e6f9c354f6ac@amd.com> (raw)
In-Reply-To: <CAKMK7uE3dppw2=sVHRKx1b-ehVfiBphoJNJvpoPjt-=KsPp=Yw@mail.gmail.com>

Am 13.07.21 um 08:50 schrieb Daniel Vetter:
> On Tue, Jul 13, 2021 at 8:35 AM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 12.07.21 um 19:53 schrieb Daniel Vetter:
>>> It might be good enough on x86 with just READ_ONCE, but the write side
>>> should then at least be WRITE_ONCE because x86 has total store order.
>>>
>>> It's definitely not enough on arm.
>>>
>>> Fix this properly, which means
>>> - explain the need for the barrier in both places
>>> - point at the other side in each comment
>>>
>>> Also pull out the !sched_list case as the first check, so that the
>>> code flow is clearer.
>>>
>>> While at it, sprinkle some comments around because it was very
>>> non-obvious to me what's actually going on here and why.
>>>
>>> Note that we really need full barriers here, at first I thought
>>> store-release and load-acquire on ->last_scheduled would be enough,
>>> but we actually require ordering between that and the queue state.
>>>
>>> v2: Put smp_rmb() in the right place and fix up comment (Andrey)
>>>
>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>> Cc: "Christian König" <christian.koenig@amd.com>
>>> Cc: Steven Price <steven.price@arm.com>
>>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>>> Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>> Cc: Lee Jones <lee.jones@linaro.org>
>>> Cc: Boris Brezillon <boris.brezillon@collabora.com>
>>> ---
>>>    drivers/gpu/drm/scheduler/sched_entity.c | 27 ++++++++++++++++++++++--
>>>    1 file changed, 25 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index f7347c284886..89e3f6eaf519 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -439,8 +439,16 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>>>                dma_fence_set_error(&sched_job->s_fence->finished, -ECANCELED);
>>>
>>>        dma_fence_put(entity->last_scheduled);
>>> +
>>>        entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
>>>
>>> +     /*
>>> +      * If the queue is empty we allow drm_sched_entity_select_rq() to
>>> +      * locklessly access ->last_scheduled. This only works if we set the
>>> +      * pointer before we dequeue and if we add a write barrier here.
>>> +      */
>>> +     smp_wmb();
>>> +
>> Again, conceptually those barriers should be part of the spsc_queue
>> container and not added externally.
> That would be an extremely unusual API. Let's assume that your queue is
> very dumb, and protected by a simple lock. That's about the maximum
> any user could expect.
>
> But then you still need barriers here, because Linux locks (spinlock,
> mutex) are defined to be one-way barriers: stuff that's inside is
> guaranteed to be done inside, but stuff outside of the locked region
> can leak in. They're load-acquire/store-release barriers. So not good
> enough.
>
> You really need to have barriers here, and they really all need to be
> documented properly. And yes that's a shit-ton of work in drm/sched,
> because it's full of yolo lockless stuff.
>
> The other case you could make is that this works like a wakeup queue,
> or similar. The rules there are:
> - wake_up (i.e. pushing something into the queue) is a store-release barrier
> - the wake-up side (i.e. popping an entry) is a load-acquire barrier
> Which is obviously needed because otherwise you don't have coherency
> for the data queued up. And again those are not the barriers you're
> looking for here.

Exactly that was the idea, yes.
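
Roughly, the pattern I have in mind looks like this (just a sketch, the
struct and function names are made up and this is not the actual
spsc_queue code):

    #include <asm/barrier.h>   /* smp_store_release()/smp_load_acquire() */

    struct slot {
            int payload;
            int ready;
    };

    /* producer, i.e. the push side */
    static void publish(struct slot *s, int value)
    {
            s->payload = value;                  /* prepare the data */
            smp_store_release(&s->ready, 1);     /* then publish it  */
    }

    /* consumer, i.e. the pop side */
    static int try_consume(struct slot *s, int *value)
    {
            if (!smp_load_acquire(&s->ready))    /* pairs with the release above */
                    return 0;                    /* nothing published yet */
            *value = s->payload;                 /* guaranteed to see the store */
            return 1;
    }

That gives coherency for the queued job itself; on top of that the queue
state needs to order ->last_scheduled as well, which is what I get to
below.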

> Either way, we'd still need the comments, because it's still lockless
> trickery, and every single one of them needs to have a comment on both
> sides to explain what's going on.
>
> Essentially replace spsc_queue with an llist underneath, and that's
> the amount of barriers a data structure should provide. Anything else
> is asking your data structure to paper over bugs in your users.
>
> This is similar to how atomic_t is by default completely unordered,
> and users need to add barriers as needed, with comments.

My main problem is, as always, that kernel atomics work differently from
userspace atomics.

> I think this is all to make sure people don't just write lockless algorithms
> because it's a cool idea, but are forced to think this all through.
> Which seems not to have happened very consistently for drm/sched, so I
> guess it needs to be fixed.

Well, at least initially that was all perfectly thought through. The
problem is that nobody is really maintaining that stuff.

> I'm definitely not going to hide all that by making the spsc_queue
> stuff provide random unjustified barriers just because that would
> paper over drm/sched bugs. We need to fix the actual bugs, and
> preferably all of them. I've found a few, but I wasn't involved in
> drm/sched thus far, so best I can do is discover them as we go.

I don't think that those are random unjustified barriers at all, and it
sounds like you didn't grasp what I said here.

See, the spsc queue must have the following semantics:

1. When you pop a job, all changes made before the job was pushed must be
visible.

2. When the queue becomes empty, all changes made before popping the last
job must be visible.

Otherwise I completely agree with you that the whole scheduler doesn't 
work at all and we need to add tons of external barriers.
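
A rough sketch of what I mean, with both guarantees folded into the queue
itself (hypothetical code; the names and layout are made up and this is
not the real spsc_queue implementation):

    #include <linux/types.h>      /* bool */
    #include <linux/compiler.h>   /* READ_ONCE() */
    #include <asm/barrier.h>      /* smp_store_release()/smp_load_acquire() */

    #define RING_SIZE 16

    struct sketch_ring {
            void *slot[RING_SIZE];
            unsigned int head;    /* only written by the consumer */
            unsigned int tail;    /* only written by the producer */
    };

    /* Semantics 1: everything written before the push is visible to
     * whoever pops the job, because the release on ->tail pairs with
     * the acquire in sketch_pop(). */
    static bool sketch_push(struct sketch_ring *r, void *job)
    {
            unsigned int tail = r->tail;

            if (tail - READ_ONCE(r->head) == RING_SIZE)
                    return false;                       /* full */
            r->slot[tail % RING_SIZE] = job;
            smp_store_release(&r->tail, tail + 1);
            return true;
    }

    /* Semantics 2: everything the consumer wrote before the pop is
     * visible before the queue can be observed as empty, because the
     * release on ->head pairs with the acquire in sketch_empty(). */
    static void *sketch_pop(struct sketch_ring *r)
    {
            unsigned int head = r->head;
            void *job;

            if (head == smp_load_acquire(&r->tail))
                    return NULL;                        /* nothing queued */
            job = r->slot[head % RING_SIZE];
            smp_store_release(&r->head, head + 1);
            return job;
    }

    static bool sketch_empty(struct sketch_ring *r)
    {
            return smp_load_acquire(&r->head) == READ_ONCE(r->tail);
    }

With that, a lockless reader which observes the queue as empty through
sketch_empty() is also guaranteed to see everything the consumer stored
before popping the last job, e.g. ->last_scheduled, without any extra
barrier in the caller.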

Regards,
Christian.

> -Daniel
>
>
>> Regards,
>> Christian.
>>
>>>        spsc_queue_pop(&entity->job_queue);
>>>        return sched_job;
>>>    }
>>> @@ -459,10 +467,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>>>        struct drm_gpu_scheduler *sched;
>>>        struct drm_sched_rq *rq;
>>>
>>> -     if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
>>> +     /* single possible engine and already selected */
>>> +     if (!entity->sched_list)
>>> +             return;
>>> +
>>> +     /* queue non-empty, stay on the same engine */
>>> +     if (spsc_queue_count(&entity->job_queue))
>>>                return;
>>>
>>> -     fence = READ_ONCE(entity->last_scheduled);
>>> +     /*
>>> +      * Only when the queue is empty are we guaranteed that the scheduler
>>> +      * thread cannot change ->last_scheduled. To enforce ordering we need
>>> +      * a read barrier here. See drm_sched_entity_pop_job() for the other
>>> +      * side.
>>> +      */
>>> +     smp_rmb();
>>> +
>>> +     fence = entity->last_scheduled;
>>> +
>>> +     /* stay on the same engine if the previous job hasn't finished */
>>>        if (fence && !dma_fence_is_signaled(fence))
>>>                return;
>>>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

