From: "Christian König" <ckoenig.leichtzumerken@gmail.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 6/7] drm/amdgpu: unwrap fence chains in the explicit sync fence
Date: Mon, 14 Jun 2021 09:25:44 +0200	[thread overview]
Message-ID: <1ee8f984-783a-7522-fe4f-1a9593004828@gmail.com> (raw)
In-Reply-To: <YMN+211wnBJJIj1I@phenom.ffwll.local>

Am 11.06.21 um 17:18 schrieb Daniel Vetter:
> On Fri, Jun 11, 2021 at 12:09:19PM +0200, Christian König wrote:
>> Am 11.06.21 um 11:07 schrieb Daniel Vetter:
>>> On Thu, Jun 10, 2021 at 11:17:59AM +0200, Christian König wrote:
>>>> Unwrap the explicit fence if it is a dma_fence_chain and
>>>> sync to the first fence not matching the owner rules.
>>>>
>>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>>> ---
>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 118 +++++++++++++----------
>>>>    1 file changed, 68 insertions(+), 50 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> index 1b2ceccaf5b0..862eb3c1c4c5 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
>>>> @@ -28,6 +28,8 @@
>>>>     *    Christian König <christian.koenig@amd.com>
>>>>     */
>>>> +#include <linux/dma-fence-chain.h>
>>>> +
>>>>    #include "amdgpu.h"
>>>>    #include "amdgpu_trace.h"
>>>>    #include "amdgpu_amdkfd.h"
>>>> @@ -186,6 +188,55 @@ int amdgpu_sync_vm_fence(struct amdgpu_sync *sync, struct dma_fence *fence)
>>>>    	return amdgpu_sync_fence(sync, fence);
>>>>    }
>>>> +/* Determine based on the owner and mode if we should sync to a fence or not */
>>>> +static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
>>>> +				   enum amdgpu_sync_mode mode,
>>>> +				   void *owner, struct dma_fence *f)
>>>> +{
>>>> +	void *fence_owner = amdgpu_sync_get_owner(f);
>>>> +
>>>> +	/* Always sync to moves, no matter what */
>>>> +	if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> +		return true;
>>>> +
>>>> +	/* We only want to trigger KFD eviction fences on
>>>> +	 * evict or move jobs. Skip KFD fences otherwise.
>>>> +	 */
>>>> +	if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
>>>> +	    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> +		return false;
>>>> +
>>>> +	/* Never sync to VM updates either. */
>>>> +	if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
>>>> +	    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> +		return false;
>>>> +
>>>> +	/* Ignore fences depending on the sync mode */
>>>> +	switch (mode) {
>>>> +	case AMDGPU_SYNC_ALWAYS:
>>>> +		return true;
>>>> +
>>>> +	case AMDGPU_SYNC_NE_OWNER:
>>>> +		if (amdgpu_sync_same_dev(adev, f) &&
>>>> +		    fence_owner == owner)
>>>> +			return false;
>>>> +		break;
>>>> +
>>>> +	case AMDGPU_SYNC_EQ_OWNER:
>>>> +		if (amdgpu_sync_same_dev(adev, f) &&
>>>> +		    fence_owner != owner)
>>>> +			return false;
>>>> +		break;
>>>> +
>>>> +	case AMDGPU_SYNC_EXPLICIT:
>>>> +		return false;
>>>> +	}
>>>> +
>>>> +	WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
>>>> +	     "Adding eviction fence to sync obj");
>>>> +	return true;
>>>> +}
>>>> +
>>>>    /**
>>>>     * amdgpu_sync_resv - sync to a reservation object
>>>>     *
>>>> @@ -211,67 +262,34 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
>>>>    	/* always sync to the exclusive fence */
>>>>    	f = dma_resv_excl_fence(resv);
>>>> -	r = amdgpu_sync_fence(sync, f);
>>>> +	dma_fence_chain_for_each(f, f) {
>>> Jason has some helper for deep-walking fence chains/arrays here I think.
>>> Might want to look into that, so that we have some consistency in how we
>>> pile up multiple exclusive fences.
>> Well those helpers are not from Jason, but from me :)
>>
>> But no, for now the deep inspection is not really helpful here, since
>> grabbing a reference to a certain chain node is what makes the handling
>> easier and faster here.
>>
>> Thinking more about it, that should also make it possible for the garbage
>> collection to kick in properly.
> Hm this is tricky to reason about, but yeah with this here it's a true
> chain, and you just need to connect them. But then if a buffer is on
> multiple engines, collapsing things down occasionally might be useful.
>
> But maybe we need to do that in the bigger rework where exclusive fences
> are also just in the dma_fence_list with a "this is an exclusive one btw"
> tag.
>
> I think for the vk import case doing the deep scan makes more sense, it's
> a once-per-frame thing, and there's a much bigger chance that you have a
> pile of fences from different engines on it already.

The problem with Jason's IOCTL is that you *must* do a deep dive and 
flatten out the fences.

Otherwise somebody could use it to build an arbitrarily deep fence 
structure: dma_fence_arrays containing dma_fence_arrays containing 
dma_fence_arrays, and so on.

When you then release that structure you overflow the kernel stack, 
because the dma_fence_array does a dma_fence_put() on its entries :)

The dma_fence_chain container is intentionally designed to prevent 
that.
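For illustration, here is a minimal userspace model of the two release paths. This is hypothetical sketch code, not the real dma_fence structures: it only shows why releasing nested arrays recurses with the nesting depth while releasing a chain can walk the links iteratively in constant stack.

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical stand-in for a fence container: either array-style
 * (entries/count) or chain-style (next). Not the actual kernel types. */
struct fake_fence {
	struct fake_fence **entries;	/* array-style nesting, NULL for a leaf */
	int count;
	struct fake_fence *next;	/* chain link, NULL at the end */
};

/* Array-style release recurses into every entry, so a user-built
 * array-of-arrays-of-arrays consumes stack proportional to the nesting
 * depth -- the attack described above. Returns the deepest recursion
 * level reached, to make the cost visible. */
static int array_release(struct fake_fence *f, int depth)
{
	int max = depth;

	for (int i = 0; i < f->count; i++) {
		int d = array_release(f->entries[i], depth + 1);

		if (d > max)
			max = d;
	}
	free(f->entries);
	free(f);
	return max;
}

/* Chain-style release walks the links in a loop: constant stack usage
 * no matter how long the chain is, which is the property attributed to
 * dma_fence_chain. Returns the number of nodes freed. */
static int chain_release(struct fake_fence *f)
{
	int released = 0;

	while (f) {
		struct fake_fence *next = f->next;

		free(f);
		f = next;
		released++;
	}
	return released;
}
```

In this model a 1000-link chain frees in a single loop iteration per node, while 1000 levels of array nesting would need 1000 stack frames.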

> I think a comment explaining why we think deep scan isn't a good idea here
> would be good, just so we can appreciate our foolishness when it all goes
> wrong :-)

Ok, good point.

Thanks,
Christian.

> -Daniel
>
>
>>> Anyway pretty much one of the versions I had in mind too, except I didn't
>>> type it up.
>>>
>>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Thanks,
>> Christian.
>>
>>>> +		struct dma_fence_chain *chain = to_dma_fence_chain(f);
>>>> +
>>>> +		if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
>>>> +					   chain->fence : f)) {
>>>> +			r = amdgpu_sync_fence(sync, f);
>>>> +			dma_fence_put(f);
>>>> +			if (r)
>>>> +				return r;
>>>> +			break;
>>>> +		}
>>>> +	}
>>>>    	flist = dma_resv_shared_list(resv);
>>>> -	if (!flist || r)
>>>> -		return r;
>>>> +	if (!flist)
>>>> +		return 0;
>>>>    	for (i = 0; i < flist->shared_count; ++i) {
>>>> -		void *fence_owner;
>>>> -
>>>>    		f = rcu_dereference_protected(flist->shared[i],
>>>>    					      dma_resv_held(resv));
>>>> -		fence_owner = amdgpu_sync_get_owner(f);
>>>> -
>>>> -		/* Always sync to moves, no matter what */
>>>> -		if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED) {
>>>> +		if (amdgpu_sync_test_fence(adev, mode, owner, f)) {
>>>>    			r = amdgpu_sync_fence(sync, f);
>>>>    			if (r)
>>>> -				break;
>>>> -		}
>>>> -
>>>> -		/* We only want to trigger KFD eviction fences on
>>>> -		 * evict or move jobs. Skip KFD fences otherwise.
>>>> -		 */
>>>> -		if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
>>>> -		    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> -			continue;
>>>> -
>>>> -		/* Never sync to VM updates either. */
>>>> -		if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
>>>> -		    owner != AMDGPU_FENCE_OWNER_UNDEFINED)
>>>> -			continue;
>>>> -
>>>> -		/* Ignore fences depending on the sync mode */
>>>> -		switch (mode) {
>>>> -		case AMDGPU_SYNC_ALWAYS:
>>>> -			break;
>>>> -
>>>> -		case AMDGPU_SYNC_NE_OWNER:
>>>> -			if (amdgpu_sync_same_dev(adev, f) &&
>>>> -			    fence_owner == owner)
>>>> -				continue;
>>>> -			break;
>>>> -
>>>> -		case AMDGPU_SYNC_EQ_OWNER:
>>>> -			if (amdgpu_sync_same_dev(adev, f) &&
>>>> -			    fence_owner != owner)
>>>> -				continue;
>>>> -			break;
>>>> -
>>>> -		case AMDGPU_SYNC_EXPLICIT:
>>>> -			continue;
>>>> +				return r;
>>>>    		}
>>>> -
>>>> -		WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
>>>> -		     "Adding eviction fence to sync obj");
>>>> -		r = amdgpu_sync_fence(sync, f);
>>>> -		if (r)
>>>> -			break;
>>>>    	}
>>>> -	return r;
>>>> +	return 0;
>>>>    }
>>>>    /**
>>>> -- 
>>>> 2.25.1
>>>>

