* [PATCH 1/2] iommu/vt-d: Fix PASID leak
@ 2021-08-26 21:59 Fenghua Yu
  2021-08-26 21:59 ` [PATCH 2/2] iommu/vt-d: Fix a deadlock in SVM Fenghua Yu
  2021-08-27 12:24 ` [PATCH 1/2] iommu/vt-d: Fix PASID leak Lu Baolu
  0 siblings, 2 replies; 4+ messages in thread
From: Fenghua Yu @ 2021-08-26 21:59 UTC (permalink / raw)
  To: Lu Baolu, David Woodhouse, Jacob Jun Pan, Dave Jiang, Ashok Raj,
	Ravi V Shankar
  Cc: Fenghua Yu, iommu

mm->pasid is still needed by intel_svm_free_pasid() after load_pasid()
runs when unbinding an mm. Clearing it in load_pasid() prevents the PASID
from being freed in intel_svm_free_pasid().

Additionally, mm->pasid is already set before load_pasid() is called
during PASID allocation, so there is no need to set it again in
load_pasid() when binding an mm.

Don't update mm->pasid in load_pasid() at all, which avoids the issues
in both the bind and unbind paths.

Please note that load_pasid() and the functions it calls will be
rewritten in the upcoming series that re-enables ENQCMD. This patch fixes
the issues cleanly before that series lands.
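As a rough illustration, the leak pattern looks like the userspace sketch
below (plain C, not kernel code; all names here are invented stand-ins for
mm->pasid and intel_svm_free_pasid()):

```c
/* Hypothetical userspace sketch of the leak: if the id recorded in the
 * mm is cleared before the free routine runs, the free routine can no
 * longer tell which id to release, so the id stays allocated forever. */
#include <assert.h>
#include <stdbool.h>

#define NR_IDS 8
static bool id_busy[NR_IDS];

struct fake_mm { int pasid; };		/* stand-in for mm->pasid */

static int alloc_pasid(struct fake_mm *mm)
{
	for (int i = 1; i < NR_IDS; i++) {
		if (!id_busy[i]) {
			id_busy[i] = true;
			mm->pasid = i;	/* set once at allocation time */
			return i;
		}
	}
	return -1;
}

static void free_pasid(struct fake_mm *mm)
{
	/* Like intel_svm_free_pasid(): finds the id via mm->pasid. */
	if (mm->pasid > 0) {
		id_busy[mm->pasid] = false;
		mm->pasid = 0;
	}
}

/* Buggy unbind: clears mm->pasid first (as the removed store in
 * load_pasid() effectively did), so free_pasid() becomes a no-op
 * and the id leaks. */
static bool unbind_leaks(struct fake_mm *mm)
{
	int id = mm->pasid;

	mm->pasid = 0;		/* premature clear */
	free_pasid(mm);		/* no-op now: mm->pasid is already 0 */
	return id_busy[id];	/* true => still busy => leaked */
}
```

With the clear removed, free_pasid() runs first and releases the id, which
is exactly what deleting the store from load_pasid() restores.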

Fixes: 62ef907a045e ("iommu/vt-d: Fix PASID reference leak")

Reported-and-tested-by: Dave Jiang <dave.jiang@intel.com>
Co-developed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 drivers/iommu/intel/svm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 4b9b3f35ba0e..ceeca633a5f9 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -516,9 +516,6 @@ static void load_pasid(struct mm_struct *mm, u32 pasid)
 {
 	mutex_lock(&mm->context.lock);
 
-	/* Synchronize with READ_ONCE in update_pasid(). */
-	smp_store_release(&mm->pasid, pasid);
-
 	/* Update PASID MSR on all CPUs running the mm's tasks. */
 	on_each_cpu_mask(mm_cpumask(mm), _load_pasid, NULL, true);
 
-- 
2.33.0

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


* [PATCH 2/2] iommu/vt-d: Fix a deadlock in SVM
  2021-08-26 21:59 [PATCH 1/2] iommu/vt-d: Fix PASID leak Fenghua Yu
@ 2021-08-26 21:59 ` Fenghua Yu
  2021-08-27 12:04   ` Lu Baolu
  2021-08-27 12:24 ` [PATCH 1/2] iommu/vt-d: Fix PASID leak Lu Baolu
  1 sibling, 1 reply; 4+ messages in thread
From: Fenghua Yu @ 2021-08-26 21:59 UTC (permalink / raw)
  To: Lu Baolu, David Woodhouse, Jacob Jun Pan, Dave Jiang, Ashok Raj,
	Ravi V Shankar
  Cc: Fenghua Yu, iommu

pasid_mutex and dev->iommu->param->lock are held while unbinding an mm
flushes the IO page fault workqueue and waits for all page fault works to
finish. But an in-flight page fault work also needs to take the same two
locks while the unbind path is holding them and waiting for the work to
finish. This can cause an ABBA deadlock, as shown below:

[  186.588324] idxd 0000:00:0a.0: unbind PASID 2
[  186.596504]
[  186.598239] ======================================================
[  186.604664] WARNING: possible circular locking dependency detected
[  186.611074] 5.14.0-rc7+ #549 Not tainted
[  186.615245] ------------------------------------------------------
[  186.621670] dsa_test/898 is trying to acquire lock:
[  186.626818] ffff888100d854e8 (&param->lock){+.+.}-{3:3}, at: iopf_queue_flush_dev+0x29/0x60
[  186.635433]
[  186.635433] but task is already holding lock:
[  186.641835] ffffffff82b2f7c8 (pasid_mutex){+.+.}-{3:3}, at: intel_svm_unbind+0x34/0x1e0
[  186.650120]
[  186.650120] which lock already depends on the new lock.
[  186.650120]
[  186.659225]
[  186.659225] the existing dependency chain (in reverse order) is:
[  186.667340]
[  186.667340] -> #2 (pasid_mutex){+.+.}-{3:3}:
[  186.673744]        __mutex_lock+0x75/0x730
[  186.678164]        mutex_lock_nested+0x1b/0x20
[  186.682934]        intel_svm_page_response+0x8e/0x260
[  186.688324]        iommu_page_response+0x122/0x200
[  186.693448]        iopf_handle_group+0x1c2/0x240
[  186.698405]        process_one_work+0x2a5/0x5a0
[  186.703278]        worker_thread+0x55/0x400
[  186.707797]        kthread+0x13b/0x160
[  186.711885]        ret_from_fork+0x22/0x30
[  186.716323]
[  186.716323] -> #1 (&param->fault_param->lock){+.+.}-{3:3}:
[  186.723966]        __mutex_lock+0x75/0x730
[  186.728421]        mutex_lock_nested+0x1b/0x20
[  186.733214]        iommu_report_device_fault+0xc2/0x170
[  186.738788]        prq_event_thread+0x28a/0x580
[  186.743662]        irq_thread_fn+0x28/0x60
[  186.748105]        irq_thread+0xcf/0x180
[  186.752373]        kthread+0x13b/0x160
[  186.756465]        ret_from_fork+0x22/0x30
[  186.760907]
[  186.760907] -> #0 (&param->lock){+.+.}-{3:3}:
[  186.767430]        __lock_acquire+0x1134/0x1d60
[  186.772303]        lock_acquire+0xc6/0x2e0
[  186.776748]        __mutex_lock+0x75/0x730
[  186.781192]        mutex_lock_nested+0x1b/0x20
[  186.786069]        iopf_queue_flush_dev+0x29/0x60
[  186.791118]        intel_svm_drain_prq+0x127/0x210
[  186.796266]        intel_svm_unbind+0xc5/0x1e0
[  186.801047]        iommu_sva_unbind_device+0x62/0x80
[  186.806346]        idxd_cdev_release+0x15a/0x200 [idxd]
[  186.811913]        __fput+0x9c/0x250
[  186.815833]        ____fput+0xe/0x10
[  186.819741]        task_work_run+0x64/0xa0
[  186.824165]        exit_to_user_mode_prepare+0x227/0x230
[  186.829805]        syscall_exit_to_user_mode+0x2c/0x60
[  186.835260]        do_syscall_64+0x48/0x90
[  186.839681]        entry_SYSCALL_64_after_hwframe+0x44/0xae
[  186.845560]
[  186.845560] other info that might help us debug this:
[  186.845560]
[  186.854476] Chain exists of:
[  186.854476]   &param->lock --> &param->fault_param->lock --> pasid_mutex
[  186.854476]
[  186.866386]  Possible unsafe locking scenario:
[  186.866386]
[  186.872942]        CPU0                    CPU1
[  186.877754]        ----                    ----
[  186.882571]   lock(pasid_mutex);
[  186.886131]                                lock(&param->fault_param->lock);
[  186.893368]                                lock(pasid_mutex);
[  186.899405]   lock(&param->lock);
[  186.903009]
[  186.903009]  *** DEADLOCK ***
[  186.903009]
[  186.909791] 2 locks held by dsa_test/898:
[  186.914079]  #0: ffff888100cc1cc0 (&group->mutex){+.+.}-{3:3}, at: iommu_sva_unbind_device+0x53/0x80
[  186.923486]  #1: ffffffff82b2f7c8 (pasid_mutex){+.+.}-{3:3}, at: intel_svm_unbind+0x34/0x1e0
[  186.932214]
[  186.932214] stack backtrace:
[  186.937191] CPU: 2 PID: 898 Comm: dsa_test Not tainted 5.14.0-rc7+ #549
[  186.944108] Hardware name: Intel Corporation Kabylake Client platform/KBL S DDR4 UD IMM CRB, BIOS KBLSE2R1.R00.X050.P01.1608011715 08/01/2016
[  186.957346] Call Trace:
[  186.960131]  dump_stack_lvl+0x5b/0x74
[  186.964133]  dump_stack+0x10/0x12
[  186.967789]  print_circular_bug.cold+0x13d/0x142
[  186.972784]  check_noncircular+0xf1/0x110
[  186.977138]  __lock_acquire+0x1134/0x1d60
[  186.981483]  lock_acquire+0xc6/0x2e0
[  186.985397]  ? iopf_queue_flush_dev+0x29/0x60
[  186.990088]  ? pci_mmcfg_read+0xde/0x240
[  186.994355]  __mutex_lock+0x75/0x730
[  186.998276]  ? iopf_queue_flush_dev+0x29/0x60
[  187.002982]  ? pci_mmcfg_read+0xfd/0x240
[  187.007247]  ? iopf_queue_flush_dev+0x29/0x60
[  187.011948]  mutex_lock_nested+0x1b/0x20
[  187.016217]  iopf_queue_flush_dev+0x29/0x60
[  187.020788]  intel_svm_drain_prq+0x127/0x210
[  187.025396]  ? intel_pasid_tear_down_entry+0x22e/0x240
[  187.030881]  intel_svm_unbind+0xc5/0x1e0
[  187.035156]  iommu_sva_unbind_device+0x62/0x80
[  187.039955]  idxd_cdev_release+0x15a/0x200
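
The circular acquisition order in the splat above can be modeled with a
small userspace sketch (pthreads, purely illustrative; a single thread
stands in for both CPUs, and trylock stands in for the blocking lock()
that would actually hang, so the program demonstrates the inversion
without deadlocking):

```c
/* Illustrative model of the ABBA inversion; not kernel code. Default
 * (non-recursive) mutexes return EBUSY from trylock when the lock is
 * already held, which is how we detect that each side would block. */
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t pasid_lock = PTHREAD_MUTEX_INITIALIZER; /* pasid_mutex */
static pthread_mutex_t param_lock = PTHREAD_MUTEX_INITIALIZER; /* param->lock */

static int abba_inversion(void)
{
	/* CPU0 (unbind path) holds pasid_mutex... */
	pthread_mutex_lock(&pasid_lock);
	/* ...while CPU1 (page fault work) holds param->lock. */
	pthread_mutex_lock(&param_lock);

	/* Now CPU0 wants param->lock and CPU1 wants pasid_mutex:
	 * both trylocks fail, i.e. both sides would block forever. */
	int cpu0_stuck = pthread_mutex_trylock(&param_lock) != 0;
	int cpu1_stuck = pthread_mutex_trylock(&pasid_lock) != 0;

	pthread_mutex_unlock(&param_lock);
	pthread_mutex_unlock(&pasid_lock);
	return cpu0_stuck && cpu1_stuck;	/* 1 => circular wait */
}
```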

pasid_mutex protects the PASID and SVM data mappings. It's unnecessary
to hold pasid_mutex while flushing the workqueue. To fix the deadlock,
unlock pasid_mutex while flushing the workqueue so that the pending works
can be handled.
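
The unlock-around-wait pattern can be sketched in userspace as follows
(pthreads, illustrative names only; pthread_join() plays the role of
iopf_queue_flush_dev() waiting for the in-flight work):

```c
/* Userspace sketch of the fix pattern, not kernel code: drop the mutex
 * around the "flush and wait" step so that an in-flight work, which
 * also needs the mutex, can run to completion. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t pasid_lock = PTHREAD_MUTEX_INITIALIZER;
static bool work_done;

static void *fault_work(void *arg)
{
	(void)arg;
	/* The page fault work takes the same mutex, just as
	 * intel_svm_page_response() takes pasid_mutex. */
	pthread_mutex_lock(&pasid_lock);
	work_done = true;
	pthread_mutex_unlock(&pasid_lock);
	return NULL;
}

static bool drain_without_deadlock(void)
{
	pthread_t worker;
	bool done;

	pthread_mutex_lock(&pasid_lock);
	pthread_create(&worker, NULL, fault_work, NULL);

	/* Joining while still holding pasid_lock would deadlock,
	 * because fault_work() blocks on it. Mirror the patch:
	 * unlock, wait (the "flush"), then relock. */
	pthread_mutex_unlock(&pasid_lock);
	pthread_join(worker, NULL);	/* plays iopf_queue_flush_dev() */
	pthread_mutex_lock(&pasid_lock);

	done = work_done;
	pthread_mutex_unlock(&pasid_lock);
	return done;
}
```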

Fixes: d5b9e4bfe0d8 ("iommu/vt-d: Report prq to io-pgfault framework")

Reported-and-tested-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 drivers/iommu/intel/svm.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index ceeca633a5f9..d575082567ca 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -793,7 +793,19 @@ static void intel_svm_drain_prq(struct device *dev, u32 pasid)
 		goto prq_retry;
 	}
 
+	/*
+	 * A work in IO page fault workqueue may try to lock pasid_mutex now.
+	 * Holding pasid_mutex while waiting in iopf_queue_flush_dev() for
+	 * all works in the workqueue to finish may cause deadlock.
+	 *
+	 * It's unnecessary to hold pasid_mutex in iopf_queue_flush_dev().
+	 * Unlock it to allow the works to be handled while waiting for
+	 * them to finish.
+	 */
+	lockdep_assert_held(&pasid_mutex);
+	mutex_unlock(&pasid_mutex);
 	iopf_queue_flush_dev(dev);
+	mutex_lock(&pasid_mutex);
 
 	/*
 	 * Perform steps described in VT-d spec CH7.10 to drain page
-- 
2.33.0



* Re: [PATCH 2/2] iommu/vt-d: Fix a deadlock in SVM
  2021-08-26 21:59 ` [PATCH 2/2] iommu/vt-d: Fix a deadlock in SVM Fenghua Yu
@ 2021-08-27 12:04   ` Lu Baolu
  0 siblings, 0 replies; 4+ messages in thread
From: Lu Baolu @ 2021-08-27 12:04 UTC (permalink / raw)
  To: Fenghua Yu, David Woodhouse, Jacob Jun Pan, Dave Jiang,
	Ashok Raj, Ravi V Shankar
  Cc: iommu

Hi Fenghua,

On 2021/8/27 5:59, Fenghua Yu wrote:
> pasid_mutex and dev->iommu->param->lock are held while unbinding an mm
> flushes the IO page fault workqueue and waits for all page fault works to
> finish. But an in-flight page fault work also needs to take the same two
> locks while the unbind path is holding them and waiting for the work to
> finish. This can cause an ABBA deadlock, as shown below:
> 
> [ lockdep splat trimmed ]
> 
> pasid_mutex protects the PASID and SVM data mappings. It's unnecessary
> to hold pasid_mutex while flushing the workqueue. To fix the deadlock,
> unlock pasid_mutex while flushing the workqueue so that the pending works
> can be handled.
> 
> Fixes: d5b9e4bfe0d8 ("iommu/vt-d: Report prq to io-pgfault framework")
> 
> Reported-and-tested-by: Dave Jiang <dave.jiang@intel.com>
> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
> ---
>   drivers/iommu/intel/svm.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> index ceeca633a5f9..d575082567ca 100644
> --- a/drivers/iommu/intel/svm.c
> +++ b/drivers/iommu/intel/svm.c
> @@ -793,7 +793,19 @@ static void intel_svm_drain_prq(struct device *dev, u32 pasid)
>   		goto prq_retry;
>   	}
>   
> +	/*
> +	 * A work in IO page fault workqueue may try to lock pasid_mutex now.
> +	 * Holding pasid_mutex while waiting in iopf_queue_flush_dev() for
> +	 * all works in the workqueue to finish may cause deadlock.
> +	 *
> +	 * It's unnecessary to hold pasid_mutex in iopf_queue_flush_dev().
> +	 * Unlock it to allow the works to be handled while waiting for
> +	 * them to finish.
> +	 */
> +	lockdep_assert_held(&pasid_mutex);
> +	mutex_unlock(&pasid_mutex);
>   	iopf_queue_flush_dev(dev);
> +	mutex_lock(&pasid_mutex);
>   
>   	/*
>   	 * Perform steps described in VT-d spec CH7.10 to drain page
> 

Good catch. Thanks for the fix.

I've changed the commit title to "iommu/vt-d: Fix a deadlock in
intel_svm_drain_prq()" and queued it for iommu/fix.

Best regards,
baolu


* Re: [PATCH 1/2] iommu/vt-d: Fix PASID leak
  2021-08-26 21:59 [PATCH 1/2] iommu/vt-d: Fix PASID leak Fenghua Yu
  2021-08-26 21:59 ` [PATCH 2/2] iommu/vt-d: Fix a deadlock in SVM Fenghua Yu
@ 2021-08-27 12:24 ` Lu Baolu
  1 sibling, 0 replies; 4+ messages in thread
From: Lu Baolu @ 2021-08-27 12:24 UTC (permalink / raw)
  To: Fenghua Yu, David Woodhouse, Jacob Jun Pan, Dave Jiang,
	Ashok Raj, Ravi V Shankar
  Cc: iommu

Hi Fenghua,

On 2021/8/27 5:59, Fenghua Yu wrote:
> mm->pasid is still needed by intel_svm_free_pasid() after load_pasid()
> runs when unbinding an mm. Clearing it in load_pasid() prevents the PASID
> from being freed in intel_svm_free_pasid().
> 
> Additionally, mm->pasid is already set before load_pasid() is called
> during PASID allocation, so there is no need to set it again in
> load_pasid() when binding an mm.
> 
> Don't update mm->pasid in load_pasid() at all, which avoids the issues
> in both the bind and unbind paths.
> 
> Please note that load_pasid() and the functions it calls will be
> rewritten in the upcoming series that re-enables ENQCMD. This patch fixes
> the issues cleanly before that series lands.

I will remove the above paragraph since it's irrelevant to the issue and
the fix itself. This fix is needed not only for the ENQCMD case.

> 
> Fixes: 62ef907a045e ("iommu/vt-d: Fix PASID reference leak")

This change actually fixes the commit below:

Fixes: 404837741416 ("iommu/vt-d: Use iommu_sva_alloc(free)_pasid() helpers")

> 
> Reported-and-tested-by: Dave Jiang <dave.jiang@intel.com>
> Co-developed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
> ---
>   drivers/iommu/intel/svm.c | 3 ---
>   1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
> index 4b9b3f35ba0e..ceeca633a5f9 100644
> --- a/drivers/iommu/intel/svm.c
> +++ b/drivers/iommu/intel/svm.c
> @@ -516,9 +516,6 @@ static void load_pasid(struct mm_struct *mm, u32 pasid)
>   {
>   	mutex_lock(&mm->context.lock);
>   
> -	/* Synchronize with READ_ONCE in update_pasid(). */
> -	smp_store_release(&mm->pasid, pasid);
> -
>   	/* Update PASID MSR on all CPUs running the mm's tasks. */
>   	on_each_cpu_mask(mm_cpumask(mm), _load_pasid, NULL, true);
>   
> 

I will change the commit title to "iommu/vt-d: Fix PASID leak in
intel_svm_unbind_mm()" and queue it for iommu/fix.

Best regards,
baolu

