* [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
@ 2021-11-02  2:40 Alex Sierra
  2021-11-02 14:18 ` Felix Kuehling
  2021-11-02 15:04 ` philip yang
  0 siblings, 2 replies; 6+ messages in thread
From: Alex Sierra @ 2021-11-02  2:40 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Sierra

[Why]:
When we call hmm_range_fault to map memory after a migration, we don't
expect memory to be migrated again as a result of hmm_range_fault. The
driver ensures that all memory is in GPU-accessible locations so that
no migration should be needed. However, there is one corner case where
hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
back to system memory due to a write-fault when a system memory page in
the same range was mapped read-only (e.g. COW). Ranges with individual
pages in different locations are usually the result of failed page
migrations (e.g. page lock contention). The unexpected migration back
to system memory causes a deadlock from recursive locking in our
driver.

[How]:
Add a new task reference member under the svm_range_list_init struct.
Set it to the "current" task reference right before hmm_range_fault is
called. This member is checked against the "current" reference in the
svm_migrate_to_ram callback function; if they match, the migration is
ignored.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 4 ++++
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 2 ++
 3 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index bff40e8bca67..eb19f44ec86d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -936,6 +936,10 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
 		pr_debug("failed find process at fault address 0x%lx\n", addr);
 		return VM_FAULT_SIGBUS;
 	}
+	if (READ_ONCE(p->svms.faulting_task) == current) {
+		pr_debug("skipping ram migration\n");
+		return 0;
+	}
 	addr >>= PAGE_SHIFT;
 	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index f88666bdf57c..7b41a58b1ade 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -858,6 +858,7 @@ struct svm_range_list {
 	atomic_t			evicted_ranges;
 	struct delayed_work		restore_work;
 	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
+	struct task_struct 		*faulting_task;
 };
 
 /* Process data */
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 939c863315ba..4031c2a67af4 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -1492,9 +1492,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
 
 		next = min(vma->vm_end, end);
 		npages = (next - addr) >> PAGE_SHIFT;
+		WRITE_ONCE(p->svms.faulting_task, current);
 		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
 					       addr, npages, &hmm_range,
 					       readonly, true, owner);
+		WRITE_ONCE(p->svms.faulting_task, NULL);
 		if (r) {
 			pr_debug("failed %d to get svm range pages\n", r);
 			goto unreserve_out;
-- 
2.32.0
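The fix above is an instance of a general reentrancy-guard pattern: before entering a path that may fault back into the driver, record which task is entering it, and have the fault callback bail out when it finds itself running in that same task. Below is a minimal userspace sketch of the pattern, with illustrative stand-ins only: a plain pointer plays the role of the kernel's `current`, and the function names merely mirror the KFD ones.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for a kernel task. */
struct task { const char *name; };

struct svm_range_list {
	struct task *faulting_task;	/* task currently inside the fault path */
};

static struct task *current_task;	/* stand-in for the kernel's `current` */
static int migrations_done;

/* Stand-in for svm_migrate_to_ram(): skip the migration if the faulting
 * task is the one already holding our locks in the validate/map path. */
static int migrate_to_ram(struct svm_range_list *svms)
{
	if (svms->faulting_task == current_task)
		return 0;		/* would deadlock: same task re-entered */
	migrations_done++;
	return 1;
}

/* Stand-in for hmm_range_fault resolving a write fault by migrating. */
static void range_fault(struct svm_range_list *svms)
{
	migrate_to_ram(svms);		/* possible recursive entry */
}

/* Stand-in for svm_range_validate_and_map(): arm the guard around the call. */
static void validate_and_map(struct svm_range_list *svms)
{
	svms->faulting_task = current_task;
	range_fault(svms);
	svms->faulting_task = NULL;
}
```

The READ_ONCE/WRITE_ONCE accessors in the real patch additionally keep the compiler from tearing or caching the pointer accesses, since the CPU fault handler and the validate/map path can run concurrently; the single-threaded sketch omits that detail.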



* Re: [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
  2021-11-02  2:40 [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM Alex Sierra
@ 2021-11-02 14:18 ` Felix Kuehling
  2021-11-02 15:04 ` philip yang
  1 sibling, 0 replies; 6+ messages in thread
From: Felix Kuehling @ 2021-11-02 14:18 UTC (permalink / raw)
  To: Alex Sierra, amd-gfx

On 2021-11-01 at 10:40 p.m., Alex Sierra wrote:
> [Why]:
> When we call hmm_range_fault to map memory after a migration, we don't
> expect memory to be migrated again as a result of hmm_range_fault. The
> driver ensures that all memory is in GPU-accessible locations so that
> no migration should be needed. However, there is one corner case where
> hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
> back to system memory due to a write-fault when a system memory page in
> the same range was mapped read-only (e.g. COW). Ranges with individual
> pages in different locations are usually the result of failed page
> migrations (e.g. page lock contention). The unexpected migration back
> to system memory causes a deadlock from recursive locking in our
> driver.
>
> [How]:
> Add a new task reference member under the svm_range_list_init struct.

The _init is not part of the struct name. With that fixed, the patch is

Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>


> Set it to the "current" task reference right before hmm_range_fault is
> called. This member is checked against the "current" reference in the
> svm_migrate_to_ram callback function; if they match, the migration is
> ignored.
>
> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 4 ++++
>  drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 1 +
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 2 ++
>  3 files changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index bff40e8bca67..eb19f44ec86d 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -936,6 +936,10 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
>  		pr_debug("failed find process at fault address 0x%lx\n", addr);
>  		return VM_FAULT_SIGBUS;
>  	}
> +	if (READ_ONCE(p->svms.faulting_task) == current) {
> +		pr_debug("skipping ram migration\n");
> +		return 0;
> +	}
>  	addr >>= PAGE_SHIFT;
>  	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> index f88666bdf57c..7b41a58b1ade 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> @@ -858,6 +858,7 @@ struct svm_range_list {
>  	atomic_t			evicted_ranges;
>  	struct delayed_work		restore_work;
>  	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
> +	struct task_struct 		*faulting_task;
>  };
>  
>  /* Process data */
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index 939c863315ba..4031c2a67af4 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -1492,9 +1492,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>  
>  		next = min(vma->vm_end, end);
>  		npages = (next - addr) >> PAGE_SHIFT;
> +		WRITE_ONCE(p->svms.faulting_task, current);
>  		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
>  					       addr, npages, &hmm_range,
>  					       readonly, true, owner);
> +		WRITE_ONCE(p->svms.faulting_task, NULL);
>  		if (r) {
>  			pr_debug("failed %d to get svm range pages\n", r);
>  			goto unreserve_out;


* Re: [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
  2021-11-02  2:40 [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM Alex Sierra
  2021-11-02 14:18 ` Felix Kuehling
@ 2021-11-02 15:04 ` philip yang
  2021-11-02 15:54   ` Sierra Guiza, Alejandro (Alex)
  1 sibling, 1 reply; 6+ messages in thread
From: philip yang @ 2021-11-02 15:04 UTC (permalink / raw)
  To: Alex Sierra, amd-gfx

[-- Attachment #1: Type: text/html, Size: 3859 bytes --]


* Re: [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
  2021-11-02 15:04 ` philip yang
@ 2021-11-02 15:54   ` Sierra Guiza, Alejandro (Alex)
  0 siblings, 0 replies; 6+ messages in thread
From: Sierra Guiza, Alejandro (Alex) @ 2021-11-02 15:54 UTC (permalink / raw)
  To: philip yang, amd-gfx


On 11/2/2021 10:04 AM, philip yang wrote:
>
>
> On 2021-11-01 10:40 p.m., Alex Sierra wrote:
>> [Why]:
>> When we call hmm_range_fault to map memory after a migration, we don't
>> expect memory to be migrated again as a result of hmm_range_fault. The
>> driver ensures that all memory is in GPU-accessible locations so that
>> no migration should be needed. However, there is one corner case where
>> hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
>> back to system memory due to a write-fault when a system memory page in
>> the same range was mapped read-only (e.g. COW). Ranges with individual
>> pages in different locations are usually the result of failed page
>> migrations (e.g. page lock contention). The unexpected migration back
>> to system memory causes a deadlock from recursive locking in our
>> driver.
>>
>> [How]:
>> Add a new task reference member under the svm_range_list_init struct.
>> Set it to the "current" task reference right before hmm_range_fault is
>> called. This member is checked against the "current" reference in the
>> svm_migrate_to_ram callback function; if they match, the migration is
>> ignored.
>>
>> Signed-off-by: Alex Sierra<alex.sierra@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 4 ++++
>>   drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 1 +
>>   drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 2 ++
>>   3 files changed, 7 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index bff40e8bca67..eb19f44ec86d 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -936,6 +936,10 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
>>   		pr_debug("failed find process at fault address 0x%lx\n", addr);
>>   		return VM_FAULT_SIGBUS;
>>   	}
>> +	if (READ_ONCE(p->svms.faulting_task) == current) {
>> +		pr_debug("skipping ram migration\n");
>
> need release refcount to avoid process leaking
>
> kfd_unref_process(p);
>
Good catch. Thanks, Philip.

Alex Sierra

> Regards,
>
> Philip
>
>> +		return 0;
>> +	}
>>   	addr >>= PAGE_SHIFT;
>>   	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
>>   
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
>> index f88666bdf57c..7b41a58b1ade 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
>> @@ -858,6 +858,7 @@ struct svm_range_list {
>>   	atomic_t			evicted_ranges;
>>   	struct delayed_work		restore_work;
>>   	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
>> +	struct task_struct 		*faulting_task;
>>   };
>>   
>>   /* Process data */
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
>> index 939c863315ba..4031c2a67af4 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
>> @@ -1492,9 +1492,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>>   
>>   		next = min(vma->vm_end, end);
>>   		npages = (next - addr) >> PAGE_SHIFT;
>> +		WRITE_ONCE(p->svms.faulting_task, current);
>>   		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
>>   					       addr, npages, &hmm_range,
>>   					       readonly, true, owner);
>> +		WRITE_ONCE(p->svms.faulting_task, NULL);
>>   		if (r) {
>>   			pr_debug("failed %d to get svm range pages\n", r);
>>   			goto unreserve_out;
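Philip's point generalizes: when a lookup returns its result with an extra reference taken, every return path added afterwards, including a new early return, must drop that reference. A small self-contained sketch of the fixed control flow, using hypothetical helper names (the real code pairs kfd_lookup_process_by_mm with kfd_unref_process):

```c
#include <assert.h>

struct process { int refcount; };

static struct process the_process = { .refcount = 1 };

/* Stand-in for kfd_lookup_process_by_mm(): returns the process with an
 * extra reference that the caller is responsible for dropping. */
static struct process *lookup_process(void)
{
	the_process.refcount++;
	return &the_process;
}

static void unref_process(struct process *p)
{
	p->refcount--;
}

/* Fault handler with the fix applied: the early "skip migration" return
 * drops the reference taken by the lookup, just as the normal path does. */
static int fault_handler(int skip_migration)
{
	struct process *p = lookup_process();

	if (skip_migration) {
		unref_process(p);	/* without this, the process leaks */
		return 0;
	}
	/* ... migration work would happen here ... */
	unref_process(p);
	return 1;
}
```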


* Re: [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
  2021-11-01 22:04 Alex Sierra
@ 2021-11-01 23:28 ` Felix Kuehling
  0 siblings, 0 replies; 6+ messages in thread
From: Felix Kuehling @ 2021-11-01 23:28 UTC (permalink / raw)
  To: Alex Sierra, amd-gfx

On 2021-11-01 at 6:04 p.m., Alex Sierra wrote:
> [Why]:
> During hmm_range_fault validation calls on VRAM migrations,

This sounds a bit confusing. I think the hmm_range_fault is not called
from a migration, but right after a migration, in the context of a GPU
page fault handler. I would explain this problem in a bit more detail:

When we call hmm_range_fault to map memory after a migration, we don't
expect memory to be migrated again as a result of hmm_range_fault. The
driver ensures that all memory is in GPU-accessible locations so that no
migration should be needed. However, there is one corner case where
hmm_range_fault can unexpectedly cause a migration from DEVICE_PRIVATE
back to system memory due to a write-fault when a system memory page in
the same range was mapped read-only (e.g. COW). Ranges with individual
pages in different locations are usually the result of failed page
migrations (e.g. page lock contention). The unexpected migration back to
system memory causes a deadlock from recursive locking in our driver.


>  there could
> be cases where some pages within the range could be marked as Read Only
> (COW) triggering a migration back to RAM. In this case, the migration to
> RAM will try to grab mutexes that have been held already before the
> hmm_range_fault call, causing a recursive lock.
>
> [How]:
> Add a new task reference member under the prange struct.

The task reference is not in the prange struct. It's in the
svm_range_list struct, which is a per-process structure.

One more nit-pick below.


> Set it to the "current" task reference right before hmm_range_fault is
> called. This member is checked against the "current" reference in the
> svm_migrate_to_ram callback function; if they match, the migration is
> ignored.
>
> Signed-off-by: Alex Sierra <alex.sierra@amd.com>
> ---
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 4 ++++
>  drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 1 +
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 3 +++
>  3 files changed, 8 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index bff40e8bca67..eb19f44ec86d 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -936,6 +936,10 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
>  		pr_debug("failed find process at fault address 0x%lx\n", addr);
>  		return VM_FAULT_SIGBUS;
>  	}
> +	if (READ_ONCE(p->svms.faulting_task) == current) {
> +		pr_debug("skipping ram migration\n");
> +		return 0;
> +	}
>  	addr >>= PAGE_SHIFT;
>  	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> index f88666bdf57c..7b41a58b1ade 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> @@ -858,6 +858,7 @@ struct svm_range_list {
>  	atomic_t			evicted_ranges;
>  	struct delayed_work		restore_work;
>  	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
> +	struct task_struct 		*faulting_task;
>  };
>  
>  /* Process data */
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index 939c863315ba..e9eeee2e571c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -1492,9 +1492,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>  
>  		next = min(vma->vm_end, end);
>  		npages = (next - addr) >> PAGE_SHIFT;
> +		WRITE_ONCE(p->svms.faulting_task, current);
>  		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
>  					       addr, npages, &hmm_range,
>  					       readonly, true, owner);
> +		WRITE_ONCE(p->svms.faulting_task, NULL);
>  		if (r) {
>  			pr_debug("failed %d to get svm range pages\n", r);
>  			goto unreserve_out;
> @@ -2745,6 +2747,7 @@ int svm_range_list_init(struct kfd_process *p)
>  	INIT_WORK(&svms->deferred_list_work, svm_range_deferred_list_work);
>  	INIT_LIST_HEAD(&svms->deferred_range_list);
>  	spin_lock_init(&svms->deferred_list_lock);
> +	svms->faulting_task = NULL;

This initialization is redundant because the entire kfd_process
structure containing svms is 0-initialized when it's allocated with kzalloc.

Regards,
  Felix


>  
>  	for (i = 0; i < p->n_pdds; i++)
>  		if (KFD_IS_SVM_API_SUPPORTED(p->pdds[i]->dev))
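Felix's last remark relies on a standard kernel convention: kzalloc returns zero-filled memory, so pointer members of a freshly allocated structure are already NULL and an explicit assignment in the init function is redundant. A userspace illustration with calloc standing in for kzalloc (this assumes the platform represents NULL as all-bits-zero, which holds on every platform Linux supports):

```c
#include <assert.h>
#include <stdlib.h>

struct svm_range_list {
	void *faulting_task;	/* starts out NULL in zeroed memory */
	int   evicted_ranges;
};

/* calloc, like the kernel's kzalloc, returns zero-filled memory, so an
 * explicit `svms->faulting_task = NULL;` after this call is redundant. */
static struct svm_range_list *alloc_svms(void)
{
	return calloc(1, sizeof(struct svm_range_list));
}
```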


* [PATCH] drm/amdkfd: avoid recursive lock in migrations back to RAM
@ 2021-11-01 22:04 Alex Sierra
  2021-11-01 23:28 ` Felix Kuehling
  0 siblings, 1 reply; 6+ messages in thread
From: Alex Sierra @ 2021-11-01 22:04 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Sierra

[Why]:
During hmm_range_fault validation calls on VRAM migrations, there could
be cases where some pages within the range could be marked as Read Only
(COW) triggering a migration back to RAM. In this case, the migration to
RAM will try to grab mutexes that have been held already before the
hmm_range_fault call, causing a recursive lock.

[How]:
Add a new task reference member under the prange struct. Set it to the
"current" task reference right before hmm_range_fault is called. This
member is checked against the "current" reference in the
svm_migrate_to_ram callback function; if they match, the migration is
ignored.

Signed-off-by: Alex Sierra <alex.sierra@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 4 ++++
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h    | 1 +
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index bff40e8bca67..eb19f44ec86d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -936,6 +936,10 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
 		pr_debug("failed find process at fault address 0x%lx\n", addr);
 		return VM_FAULT_SIGBUS;
 	}
+	if (READ_ONCE(p->svms.faulting_task) == current) {
+		pr_debug("skipping ram migration\n");
+		return 0;
+	}
 	addr >>= PAGE_SHIFT;
 	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index f88666bdf57c..7b41a58b1ade 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -858,6 +858,7 @@ struct svm_range_list {
 	atomic_t			evicted_ranges;
 	struct delayed_work		restore_work;
 	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
+	struct task_struct 		*faulting_task;
 };
 
 /* Process data */
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 939c863315ba..e9eeee2e571c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -1492,9 +1492,11 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
 
 		next = min(vma->vm_end, end);
 		npages = (next - addr) >> PAGE_SHIFT;
+		WRITE_ONCE(p->svms.faulting_task, current);
 		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
 					       addr, npages, &hmm_range,
 					       readonly, true, owner);
+		WRITE_ONCE(p->svms.faulting_task, NULL);
 		if (r) {
 			pr_debug("failed %d to get svm range pages\n", r);
 			goto unreserve_out;
@@ -2745,6 +2747,7 @@ int svm_range_list_init(struct kfd_process *p)
 	INIT_WORK(&svms->deferred_list_work, svm_range_deferred_list_work);
 	INIT_LIST_HEAD(&svms->deferred_range_list);
 	spin_lock_init(&svms->deferred_list_lock);
+	svms->faulting_task = NULL;
 
 	for (i = 0; i < p->n_pdds; i++)
 		if (KFD_IS_SVM_API_SUPPORTED(p->pdds[i]->dev))
-- 
2.32.0


