From: Felix Kuehling <felix.kuehling@amd.com>
To: "Zeng, Oak" <Oak.Zeng@amd.com>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v2 04/10] drm/amdgpu: Simplify AQL queue mapping
Date: Fri, 23 Apr 2021 03:23:20 -0400	[thread overview]
Message-ID: <d972007a-8108-0920-cbb2-d12798a357f9@amd.com> (raw)
In-Reply-To: <73C24A9C-4121-4A9F-A2E3-38B90D2179AC@amd.com>

On 2021-04-22 at 9:33 p.m., Zeng, Oak wrote:
> On 2021-04-21, 9:31 PM, "amd-gfx on behalf of Felix Kuehling" <amd-gfx-bounces@lists.freedesktop.org on behalf of Felix.Kuehling@amd.com> wrote:
>
>     Do AQL queue double-mapping with a single attach call. That will make it
>     easier to create per-GPU BOs later, to be shared between the two BO VA
>     mappings on the same GPU.
>
>     Freeing the attachments is not necessary if map_to_gpu fails. These will be
>     cleaned up when the kgd_mem object is destroyed in
>     amdgpu_amdkfd_gpuvm_free_memory_of_gpu.
>
>     Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
>     ---
>      .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 103 ++++++++----------
>      1 file changed, 48 insertions(+), 55 deletions(-)
>
>     diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>     index 34c9a2d0028e..fbd7e786b54e 100644
>     --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>     +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
>     @@ -486,70 +486,76 @@ static uint64_t get_pte_flags(struct amdgpu_device *adev, struct kgd_mem *mem)
>       * 4a.  Validate new page tables and directories
>       */
>      static int kfd_mem_attach(struct amdgpu_device *adev, struct kgd_mem *mem,
>     -		struct amdgpu_vm *vm, bool is_aql,
>     -		struct kfd_mem_attachment **p_attachment)
>     +		struct amdgpu_vm *vm, bool is_aql)
>      {
>      	unsigned long bo_size = mem->bo->tbo.base.size;
>      	uint64_t va = mem->va;
>     -	struct kfd_mem_attachment *attachment;
>     -	struct amdgpu_bo *bo;
>     -	int ret;
>     +	struct kfd_mem_attachment *attachment[2] = {NULL, NULL};
>     +	struct amdgpu_bo *bo[2] = {NULL, NULL};
>     +	int i, ret;
>
>      	if (!va) {
>      		pr_err("Invalid VA when adding BO to VM\n");
>      		return -EINVAL;
>      	}
>
>     -	if (is_aql)
>     -		va += bo_size;
>     -
>     -	attachment = kzalloc(sizeof(*attachment), GFP_KERNEL);
>     -	if (!attachment)
>     -		return -ENOMEM;
>     +	for (i = 0; i <= is_aql; i++) {
>     +		attachment[i] = kzalloc(sizeof(*attachment[i]), GFP_KERNEL);
>     +		if (unlikely(!attachment[i])) {
>     +			ret = -ENOMEM;
>     +			goto unwind;
>     +		}
>
>     -	pr_debug("\t add VA 0x%llx - 0x%llx to vm %p\n", va,
>     -			va + bo_size, vm);
>     +		pr_debug("\t add VA 0x%llx - 0x%llx to vm %p\n", va,
>     +			 va + bo_size, vm);
>
>     -	/* FIXME: For now all attachments use the same BO. This is incorrect
>     -	 * because one BO can only have one DMA mapping for one GPU. We need
>     -	 * one BO per GPU, e.g. a DMABuf import with dynamic attachment. This
>     -	 * will be addressed one BO-type at a time in subsequent patches.
>     -	 */
>     -	bo = mem->bo;
>     -	drm_gem_object_get(&bo->tbo.base);
>     +		/* FIXME: For now all attachments use the same BO. This is
>     +		 * incorrect because one BO can only have one DMA mapping
>     +		 * for one GPU. We need one BO per GPU, e.g. a DMABuf
>     +		 * import with dynamic attachment. This will be addressed
>     +		 * one BO-type at a time in subsequent patches.
>     +		 */
>     +		bo[i] = mem->bo;
>     +		drm_gem_object_get(&bo[i]->tbo.base);
>
>     -	/* Add BO to VM internal data structures*/
>     -	attachment->bo_va = amdgpu_vm_bo_add(adev, vm, bo);
>     -	if (!attachment->bo_va) {
>     -		ret = -EINVAL;
>     -		pr_err("Failed to add BO object to VM. ret == %d\n",
>     -				ret);
>     -		goto err_vmadd;
>     -	}
>     +		/* Add BO to VM internal data structures */
>     +		attachment[i]->bo_va = amdgpu_vm_bo_add(adev, vm, bo[i]);
> Just for discussion: are we allowed to add one BO to a VM twice? When I looked at amdgpu_vm_bo_base_init (called by amdgpu_vm_bo_add), at the line:
> bo->vm_bo = base;
> it looks like adding the same BO to the VM a second time overwrites bo->vm_bo. I am not sure whether this will cause an issue later.
> This is not introduced by your code; the original code (calling kfd_mem_attach twice for AQL) has the same problem.

If you just add one more line of context, you'll see that bo->vm_bo is
the head of a singly linked list of struct amdgpu_vm_bo_base. So adding
a BO to a VM multiple times just extends that singly linked list:

        base->next = bo->vm_bo;
        bo->vm_bo = base;

Regards,
  Felix


>     +		if (unlikely(!attachment[i]->bo_va)) {
>     +			ret = -ENOMEM;
>     +			pr_err("Failed to add BO object to VM. ret == %d\n",
>     +			       ret);
>     +			goto unwind;
>     +		}
>
>     -	attachment->va = va;
>     -	attachment->pte_flags = get_pte_flags(adev, mem);
>     -	attachment->adev = adev;
>     -	list_add(&attachment->list, &mem->attachments);
>     +		attachment[i]->va = va;
>     +		attachment[i]->pte_flags = get_pte_flags(adev, mem);
>     +		attachment[i]->adev = adev;
>     +		list_add(&attachment[i]->list, &mem->attachments);
>
>     -	if (p_attachment)
>     -		*p_attachment = attachment;
>     +		va += bo_size;
>     +	}
>
>      	/* Allocate validate page tables if needed */
>      	ret = vm_validate_pt_pd_bos(vm);
>      	if (unlikely(ret)) {
>      		pr_err("validate_pt_pd_bos() failed\n");
>     -		goto err_alloc_pts;
>     +		goto unwind;
>      	}
>
>      	return 0;
>
>     -err_alloc_pts:
>     -	amdgpu_vm_bo_rmv(adev, attachment->bo_va);
>     -	list_del(&attachment->list);
>     -err_vmadd:
>     -	drm_gem_object_put(&bo->tbo.base);
>     -	kfree(attachment);
>     +unwind:
>     +	for (; i >= 0; i--) {
>     +		if (!attachment[i])
>     +			continue;
>     +		if (attachment[i]->bo_va) {
>     +			amdgpu_vm_bo_rmv(adev, attachment[i]->bo_va);
>     +			list_del(&attachment[i]->list);
>     +		}
>     +		if (bo[i])
>     +			drm_gem_object_put(&bo[i]->tbo.base);
>     +		kfree(attachment[i]);
>     +	}
>      	return ret;
>      }
>
>     @@ -1382,8 +1388,6 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
>      	uint32_t domain;
>      	struct kfd_mem_attachment *entry;
>      	struct bo_vm_reservation_context ctx;
>     -	struct kfd_mem_attachment *attachment = NULL;
>     -	struct kfd_mem_attachment *attachment_aql = NULL;
>      	unsigned long bo_size;
>      	bool is_invalid_userptr = false;
>
>     @@ -1433,15 +1437,9 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
>      		is_invalid_userptr = true;
>
>      	if (!kfd_mem_is_attached(avm, mem)) {
>     -		ret = kfd_mem_attach(adev, mem, avm, false, &attachment);
>     +		ret = kfd_mem_attach(adev, mem, avm, mem->aql_queue);
>      		if (ret)
>      			goto attach_failed;
>     -		if (mem->aql_queue) {
>     -			ret = kfd_mem_attach(adev, mem, avm, true,
>     -					     &attachment_aql);
>     -			if (ret)
>     -				goto attach_failed_aql;
>     -		}
>      	} else {
>      		ret = vm_validate_pt_pd_bos(avm);
>      		if (unlikely(ret))
>     @@ -1496,11 +1494,6 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
>      	goto out;
>
>      map_bo_to_gpuvm_failed:
>     -	if (attachment_aql)
>     -		kfd_mem_detach(attachment_aql);
>     -attach_failed_aql:
>     -	if (attachment)
>     -		kfd_mem_detach(attachment);
>      attach_failed:
>      	unreserve_bo_and_vms(&ctx, false, false);
>      out:
>     -- 
>     2.31.1
>
>     _______________________________________________
>     amd-gfx mailing list
>     amd-gfx@lists.freedesktop.org
>     https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Thread overview: 64+ messages

2021-04-22  1:30 [PATCH v2 00/10] Implement multi-GPU DMA mappings for KFD Felix Kuehling
2021-04-22  1:30 ` [PATCH v2 01/10] rock-dbg_defconfig: Enable Intel IOMMU Felix Kuehling
2021-04-22  1:30 ` [PATCH v2 02/10] drm/amdgpu: Rename kfd_bo_va_list to kfd_mem_attachment Felix Kuehling
2021-05-10 22:00   ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 03/10] drm/amdgpu: Keep a bo-reference per-attachment Felix Kuehling
2021-05-10 22:00   ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 04/10] drm/amdgpu: Simplify AQL queue mapping Felix Kuehling
2021-04-23  1:33   ` Zeng, Oak
2021-04-23  7:23     ` Felix Kuehling [this message]
2021-05-10 22:03       ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 05/10] drm/amdgpu: Add multi-GPU DMA mapping helpers Felix Kuehling
2021-04-27  0:09   ` Zeng, Oak
2021-04-27  3:41     ` Felix Kuehling
2021-05-10 22:05       ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 06/10] drm/amdgpu: DMA map/unmap when updating GPU mappings Felix Kuehling
2021-04-27  0:23   ` Zeng, Oak
2021-04-27  3:47     ` Felix Kuehling
2021-05-10 22:06       ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 07/10] drm/amdgpu: Move kfd_mem_attach outside reservation Felix Kuehling
2021-05-10 22:06   ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 08/10] drm/amdgpu: Add DMA mapping of GTT BOs Felix Kuehling
2021-04-27  0:35   ` Zeng, Oak
2021-04-27  3:56     ` Felix Kuehling
2021-04-27 14:29       ` Zeng, Oak
2021-04-27 15:08         ` Felix Kuehling
2021-05-10 22:07           ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 09/10] drm/ttm: Don't count pages in SG BOs against pages_limit Felix Kuehling
2021-05-10 22:08   ` Errabolu, Ramesh
2021-04-22  1:30 ` [PATCH v2 10/10] drm/amdgpu: Move dmabuf attach/detach to backend_(un)bind Felix Kuehling
2021-04-22 11:20   ` Christian König
2021-05-10 22:09     ` Errabolu, Ramesh
2021-04-27 15:16 ` [PATCH v2 00/10] Implement multi-GPU DMA mappings for KFD Zeng, Oak
