From: "Christian König" <christian.koenig@amd.com>
To: "Daniel Vetter" <daniel@ffwll.ch>,
	"Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: "Rob Clark" <robdclark@chromium.org>,
	"Daniel Stone" <daniels@collabora.com>,
	"Michel Dänzer" <michel@daenzer.net>,
	"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
	"Kevin Wang" <kevin1.wang@amd.com>,
	"DRI Development" <dri-devel@lists.freedesktop.org>,
	"moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	"Luben Tuikov" <luben.tuikov@amd.com>,
	"Kristian H . Kristensen" <hoegsberg@google.com>,
	"Chen Li" <chenli@uniontech.com>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	mesa-dev <mesa-dev@lists.freedesktop.org>,
	"Dennis Li" <Dennis.Li@amd.com>,
	"Deepak R Varma" <mh12gx2825@gmail.com>
Subject: Re: [Mesa-dev] [PATCH 01/11] drm/amdgpu: Comply with implicit fencing rules
Date: Sat, 22 May 2021 10:30:19 +0200	[thread overview]
Message-ID: <ae13093e-c364-7b90-1f91-39de42594cd6@amd.com> (raw)
In-Reply-To: <CAKMK7uFswfc96hS40uc0Lug=doYAcf-yC-eu96iWqNJnM65MJQ@mail.gmail.com>

On 21.05.21 20:31, Daniel Vetter wrote:
> [SNIP]
>> We could provide an IOCTL for the BO to change the flag.
> That's not the semantics we need.
>
>> But could we first figure out the semantics we want to use here?
>>
>> Cause I'm pretty sure we don't actually need those changes at all and as
>> said before I'm certainly NAKing things which break existing use cases.
> Please read how other drivers do this and at least _try_ to understand
> it. I'm really losing my patience here with you NAKing patches you're
> not even understanding (or did you actually read and fully understand
> the entire story I typed up here, and your NAK is on the entire
> thing?). There's not much useful conversation to be had with that
> approach. And with drivers I mean kernel + userspace here.

Well, to be honest, I did fully read that, but I was just too
emotionally attached to answer more appropriately in that moment.

And I'm sorry that I reacted emotionally to that, but it is really
frustrating that I'm not able to convince you that we have a major
problem which affects all drivers, not just amdgpu.

Regarding the reason why I'm NAKing this particular patch: you are
breaking existing uAPI for RADV with it. And as a maintainer of the
driver I simply have no other choice than to say halt, stop, we can't
do it like this.

I'm perfectly aware that I have some holes in my understanding of how
ANV or other Vulkan/OpenGL stacks work. But you should probably also
admit that you have some holes in your understanding of how amdgpu
works, or otherwise I can't imagine why you would suggest a patch which
simply breaks RADV.

I mean we have been working together for years now and I think you know
me pretty well. Do you really think I would scream bloody hell, we
can't do this, without a good reason?

So let's stop throwing half-baked solutions at each other and discuss
what we can do to solve the different problems we are both seeing here.

> That's the other frustration part: You're trying to fix this purely in
> the kernel. This is exactly one of these issues why we require open
> source userspace, so that we can fix the issues correctly across the
> entire stack. And meanwhile you're steadfastly refusing to even look
> at the userspace side of the picture.

Well I do fully understand the userspace side of the picture for the AMD 
stack. I just don't think we should give userspace that much control 
over the fences in the dma_resv object without untangling them from 
resource management.

And RADV is exercising exclusive sync for amdgpu already. You can
submit to the GFX, compute and SDMA queues in Vulkan, and those
currently won't over-synchronize.

When you then send a texture generated by multiple engines to the
compositor, the kernel will correctly insert waits for all submissions
of the other process.

So this already works for RADV and completely without the IOCTL Jason 
proposed. IIRC we also have unit tests which exercised that feature for 
the video decoding use case long before RADV even existed.
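
To make the owner-based rule concrete, here is a minimal sketch. The
struct and helper names are made up for illustration; this is not the
real amdgpu code, which tracks the submitting VM/process per fence in a
similar spirit:

#include <linux/dma-fence.h>
#include <linux/kernel.h>	/* container_of() */

/* Hypothetical fence wrapper which remembers who created the fence. */
struct sketch_fence {
	struct dma_fence base;
	void *owner;		/* e.g. the submitting process/VM */
};

/*
 * Only fences created by a different owner become dependencies. That
 * is why GFX, compute and SDMA submissions of the same Vulkan
 * application don't over-synchronize against each other, while the
 * compositor still waits for all of them.
 */
static bool sketch_need_sync(struct dma_fence *f, void *my_owner)
{
	struct sketch_fence *sf =
		container_of(f, struct sketch_fence, base);

	return sf->owner != my_owner;
}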

And yes, I have to admit that I hadn't thought about the interaction
with other drivers when I came up with this, because the rules of that
interaction weren't clear to me at the time.

> Also I thought through your tlb issue, why are you even putting these
> tlb flush fences into the shared dma_resv slots? If you store them
> somewhere else in the amdgpu private part, the oversync issue goes
> away
> - in your ttm bo move callback, you can just make your bo copy job
> depend on them too (you have to anyway)
> - even for p2p there's not an issue here, because you have the
> ->move_notify callback, and can then lift the tlb flush fences from
> your private place to the shared slots so the exporter can see them.

Because adding a shared fence requires that this shared fence signals
after the exclusive fence. And this is a perfect example to explain why
that is so problematic, and also why we currently stumble over it only
in amdgpu.

In TTM we have a feature which allows evictions to be pipelined, so
that we don't wait for the evicting DMA operation. Without that,
drivers will stall waiting for their allocations to finish whenever we
need to allocate memory.

For certain use cases this gives you a ~20% fps increase under memory 
pressure, so it is a really important feature.

This works by adding the fence of the last eviction DMA operation to BOs 
when their backing store is newly allocated. That's what the 
ttm_bo_add_move_fence() function you stumbled over is good for: 
https://elixir.bootlin.com/linux/v5.13-rc2/source/drivers/gpu/drm/ttm/ttm_bo.c#L692
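
Heavily simplified, and with most error handling trimmed, the mechanism
looks roughly like this (a sketch only, using the v5.13 TTM/dma_resv
API names, not the actual function linked above):

#include <linux/dma-resv.h>
#include <linux/spinlock.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_resource.h>

/* Caller must hold the BO's dma_resv lock. */
static int sketch_add_move_fence(struct ttm_buffer_object *bo,
				 struct ttm_resource_manager *man)
{
	struct dma_fence *fence;
	int ret;

	/* Fence of the last pipelined eviction from this memory domain. */
	spin_lock(&man->move_lock);
	fence = dma_fence_get(man->move);
	spin_unlock(&man->move_lock);
	if (!fence)
		return 0;

	/* Reserve a shared slot and publish the fence, so everybody
	 * using the BO waits for the eviction copy to finish first. */
	ret = dma_resv_reserve_shared(bo->base.resv, 1);
	if (ret) {
		dma_fence_put(fence);
		return ret;
	}
	dma_resv_add_shared_fence(bo->base.resv, fence);

	dma_fence_put(fence);
	return 0;
}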

Now the problem is that the application may be terminated before it can
complete its command submission. But since resource management only
waits for the shared fences when there are some, there is a chance that
we free up memory while it is still in use.

Because of this we have some rather crude workarounds in amdgpu. For 
example, IIRC we manually wait for any potential exclusive fence before
freeing memory.
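
Sketched out, the workaround amounts to a manual wait for all fences,
including a potential exclusive one, before the backing store is
released (illustration only, not the literal amdgpu code):

#include <linux/dma-resv.h>
#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */
#include <drm/ttm/ttm_bo_api.h>

static void sketch_free_backing_store(struct ttm_buffer_object *bo)
{
	/* wait_all = true also waits for a potential exclusive fence,
	 * which a shared-fences-only check would miss. */
	long ret = dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
					     MAX_SCHEDULE_TIMEOUT);

	if (ret <= 0)
		return;	/* real code needs proper error handling here */

	/* ... now it is actually safe to free the memory ... */
}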

We could enable this feature for radeon and nouveau as well with a
one-line change. But that would mean we'd need to maintain the
workarounds for the shortcomings of the dma_resv object design in those
drivers as well.

To summarize, I think that adding an unbound fence to protect an object
is a perfectly valid operation for resource management, but this is 
restricted by the needs of implicit sync at the moment.

> The kernel move fences otoh are a bit more nasty to wring through the
> p2p dma-buf interface. That one probably needs something new.

Well, the p2p interfaces are my least concern.

Adding the move fence means that you need to touch every place we do CS
or page flips, since you now have something which is parallel to the
explicit sync fence.

Otherwise having the move fence separately wouldn't make much sense in 
the first place if we always set it together with the exclusive fence.

Best regards and sorry for getting on your nerves so much,
Christian.

> -Daniel
>
>> Regards,
>> Christian.
>>
>>> -Daniel
>>>
>>>
>>>
>>>>> Are you bored enough to type this up for radv? I'll give Jason's kernel
>>>>> stuff another review meanwhile.
>>>>> -Daniel
>>>>>
>>>>>>>                   e->bo_va = amdgpu_vm_bo_find(vm, bo);
>>>>>>>           }
>>>>>>> --
>>>>>>> 2.31.0
>>>>>>>
>>>>> --
>>>>> Daniel Vetter
>>>>> Software Engineer, Intel Corporation
>>>>> http://blog.ffwll.ch/
>


