* [PATCH v4] vfio: fix potential deadlock on vfio group lock
@ 2023-01-14  0:03 ` Matthew Rosato
  0 siblings, 0 replies; 20+ messages in thread
From: Matthew Rosato @ 2023-01-14  0:03 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: jgg, cohuck, farman, pmorel, borntraeger, frankja, imbrenda,
	david, akrowiak, jjherne, pasic, zhenyuw, zhi.a.wang, seanjc,
	linux-s390, kvm, intel-gvt-dev, intel-gfx, linux-kernel

Currently it is possible that the final put of a KVM reference comes from
vfio during its device close operation.  This occurs while the vfio group
lock is held; however, if the vfio device is still in the kvm device list,
then the following call chain could result in a deadlock:

kvm_put_kvm
 -> kvm_destroy_vm
  -> kvm_destroy_devices
   -> kvm_vfio_destroy
    -> kvm_vfio_file_set_kvm
     -> vfio_file_set_kvm
      -> group->group_lock/group_rwsem

Avoid this scenario by having the vfio core acquire a KVM reference
the first time a device is opened and hold that reference until just
after the group lock is released following the last device close.

Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
---
Changes from v3:
* Can't check open_count after the group lock has been dropped because
  the count could change again once the lock is released (Jason)
  Solve this by stashing copies of the kvm pointer and put_kvm while
  the group lock is held, clearing the device copies in device_close()
  as soon as the open_count reaches 0, and then checking whether
  device->kvm changed before dropping the group lock.  If it changed
  during close, drop the reference using the stashed kvm and put
  function after dropping the group lock.

Changes from v2:
* Re-arrange vfio_kvm_get_kvm_safe error path to still trigger
  device_open with device->kvm=NULL (Alex)
* get device->dev_set->lock when checking device->open_count (Alex)
* but don't hold it over the kvm_put_kvm (Jason)
* get kvm_put symbol upfront and stash it in device until close (Jason)
* check CONFIG_HAVE_KVM to avoid build errors on architectures without
  KVM support

Changes from v1:
* Re-write using symbol get logic to get kvm ref during first device
  open, release the ref during device fd close after group lock is
  released
* Drop kvm get/put changes to drivers; now that vfio core holds a
  kvm ref until sometime after the device_close op is called, it
  should be fine for drivers to get and put their own references to it.
---
 drivers/vfio/group.c     | 23 +++++++++++++--
 drivers/vfio/vfio.h      |  9 ++++++
 drivers/vfio/vfio_main.c | 61 +++++++++++++++++++++++++++++++++++++---
 include/linux/vfio.h     |  2 +-
 4 files changed, 87 insertions(+), 8 deletions(-)

diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index bb24b2f0271e..b396c17d7390 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
 	}
 
 	/*
-	 * Here we pass the KVM pointer with the group under the lock.  If the
-	 * device driver will use it, it must obtain a reference and release it
-	 * during close_device.
+	 * Here we pass the KVM pointer with the group under the lock.  A
+	 * reference will be obtained the first time the device is opened and
+	 * will be held until the open_count reaches 0.
 	 */
 	ret = vfio_device_open(device, device->group->iommufd,
 			       device->group->kvm);
@@ -179,9 +179,26 @@ static int vfio_device_group_open(struct vfio_device *device)
 
 void vfio_device_group_close(struct vfio_device *device)
 {
+	void (*put_kvm)(struct kvm *kvm);
+	struct kvm *kvm;
+
 	mutex_lock(&device->group->group_lock);
+	kvm = device->kvm;
+	put_kvm = device->put_kvm;
 	vfio_device_close(device, device->group->iommufd);
+	if (kvm == device->kvm)
+		kvm = NULL;
 	mutex_unlock(&device->group->group_lock);
+
+	/*
+	 * The last kvm reference will trigger kvm_destroy_vm, which can in
+	 * turn re-enter vfio and attempt to acquire the group lock.  Therefore
+	 * we get a copy of the kvm pointer and the put function under the
+	 * group lock but wait to put that reference until after releasing the
+	 * lock.
+	 */
+	if (kvm)
+		vfio_kvm_put_kvm(put_kvm, kvm);
 }
 
 static struct file *vfio_device_open_file(struct vfio_device *device)
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index f8219a438bfb..08a5a23d6fef 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -251,4 +251,13 @@ extern bool vfio_noiommu __read_mostly;
 enum { vfio_noiommu = false };
 #endif
 
+#ifdef CONFIG_HAVE_KVM
+void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm);
+#else
+static inline void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm),
+				    struct kvm *kvm)
+{
+}
+#endif
+
 #endif
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 5177bb061b17..c6bb07af46b8 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -16,6 +16,9 @@
 #include <linux/fs.h>
 #include <linux/idr.h>
 #include <linux/iommu.h>
+#ifdef CONFIG_HAVE_KVM
+#include <linux/kvm_host.h>
+#endif
 #include <linux/list.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
@@ -344,6 +347,49 @@ static bool vfio_assert_device_open(struct vfio_device *device)
 	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
 }
 
+#ifdef CONFIG_HAVE_KVM
+static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
+{
+	void (*pfn)(struct kvm *kvm);
+	bool (*fn)(struct kvm *kvm);
+	bool ret;
+
+	pfn = symbol_get(kvm_put_kvm);
+	if (WARN_ON(!pfn))
+		return false;
+
+	fn = symbol_get(kvm_get_kvm_safe);
+	if (WARN_ON(!fn)) {
+		symbol_put(kvm_put_kvm);
+		return false;
+	}
+
+	ret = fn(kvm);
+	if (ret)
+		device->put_kvm = pfn;
+	else
+		symbol_put(kvm_put_kvm);
+
+	symbol_put(kvm_get_kvm_safe);
+
+	return ret;
+}
+
+void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm)
+{
+	if (WARN_ON(!put))
+		return;
+
+	put(kvm);
+	symbol_put(kvm_put_kvm);
+}
+#else
+static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 static int vfio_device_first_open(struct vfio_device *device,
 				  struct iommufd_ctx *iommufd, struct kvm *kvm)
 {
@@ -361,16 +407,22 @@ static int vfio_device_first_open(struct vfio_device *device,
 	if (ret)
 		goto err_module_put;
 
-	device->kvm = kvm;
+	if (kvm && vfio_kvm_get_kvm_safe(device, kvm))
+		device->kvm = kvm;
+
 	if (device->ops->open_device) {
 		ret = device->ops->open_device(device);
 		if (ret)
-			goto err_unuse_iommu;
+			goto err_put_kvm;
 	}
 	return 0;
 
-err_unuse_iommu:
-	device->kvm = NULL;
+err_put_kvm:
+	if (device->kvm) {
+		vfio_kvm_put_kvm(device->put_kvm, device->kvm);
+		device->put_kvm = NULL;
+		device->kvm = NULL;
+	}
 	if (iommufd)
 		vfio_iommufd_unbind(device);
 	else
@@ -388,6 +440,7 @@ static void vfio_device_last_close(struct vfio_device *device,
 	if (device->ops->close_device)
 		device->ops->close_device(device);
 	device->kvm = NULL;
+	device->put_kvm = NULL;
 	if (iommufd)
 		vfio_iommufd_unbind(device);
 	else
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 35be78e9ae57..87ff862ff555 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -46,7 +46,6 @@ struct vfio_device {
 	struct vfio_device_set *dev_set;
 	struct list_head dev_set_list;
 	unsigned int migration_flags;
-	/* Driver must reference the kvm during open_device or never touch it */
 	struct kvm *kvm;
 
 	/* Members below here are private, not for driver use */
@@ -58,6 +57,7 @@ struct vfio_device {
 	struct list_head group_next;
 	struct list_head iommu_entry;
 	struct iommufd_access *iommufd_access;
+	void (*put_kvm)(struct kvm *kvm);
 #if IS_ENABLED(CONFIG_IOMMUFD)
 	struct iommufd_device *iommufd_device;
 	struct iommufd_ctx *iommufd_ictx;
-- 
2.39.0




* [Intel-gfx] ✓ Fi.CI.BAT: success for vfio: fix potential deadlock on vfio group lock (rev3)
  2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
@ 2023-01-14  1:12 ` Patchwork
  -1 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-01-14  1:12 UTC (permalink / raw)
  To: Matthew Rosato; +Cc: intel-gfx


== Series Details ==

Series: vfio: fix potential deadlock on vfio group lock (rev3)
URL   : https://patchwork.freedesktop.org/series/112759/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12585 -> Patchwork_112759v3
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/index.html

Participating hosts (44 -> 43)
------------------------------

  Missing    (1): fi-snb-2520m 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_112759v3:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@gem_exec_suspend@basic-s0@smem:
    - {bat-adlm-1}:       [PASS][1] -> [DMESG-WARN][2] +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/bat-adlm-1/igt@gem_exec_suspend@basic-s0@smem.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/bat-adlm-1/igt@gem_exec_suspend@basic-s0@smem.html

  * igt@i915_selftest@live@gt_mocs:
    - {bat-rpls-2}:       [PASS][3] -> [DMESG-FAIL][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/bat-rpls-2/igt@i915_selftest@live@gt_mocs.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/bat-rpls-2/igt@i915_selftest@live@gt_mocs.html

  
Known issues
------------

  Here are the changes found in Patchwork_112759v3 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@execlists:
    - fi-kbl-soraka:      [PASS][5] -> [INCOMPLETE][6] ([i915#7156])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/fi-kbl-soraka/igt@i915_selftest@live@execlists.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/fi-kbl-soraka/igt@i915_selftest@live@execlists.html

  * igt@i915_selftest@live@gt_lrc:
    - fi-rkl-guc:         [PASS][7] -> [INCOMPLETE][8] ([i915#4983])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/fi-rkl-guc/igt@i915_selftest@live@gt_lrc.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/fi-rkl-guc/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic:
    - fi-bsw-kefka:       [PASS][9] -> [FAIL][10] ([i915#2346])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor@atomic.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@gt_pm:
    - {bat-rpls-2}:       [DMESG-FAIL][11] ([i915#4258]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/bat-rpls-2/igt@i915_selftest@live@gt_pm.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/bat-rpls-2/igt@i915_selftest@live@gt_pm.html

  * igt@i915_selftest@live@hangcheck:
    - {bat-dg2-11}:       [INCOMPLETE][13] ([i915#7834]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/bat-dg2-11/igt@i915_selftest@live@hangcheck.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/bat-dg2-11/igt@i915_selftest@live@hangcheck.html

  * igt@i915_selftest@live@requests:
    - {bat-rpls-1}:       [INCOMPLETE][15] ([i915#6257]) -> [PASS][16]
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/bat-rpls-1/igt@i915_selftest@live@requests.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/bat-rpls-1/igt@i915_selftest@live@requests.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#4258]: https://gitlab.freedesktop.org/drm/intel/issues/4258
  [i915#4983]: https://gitlab.freedesktop.org/drm/intel/issues/4983
  [i915#6257]: https://gitlab.freedesktop.org/drm/intel/issues/6257
  [i915#7156]: https://gitlab.freedesktop.org/drm/intel/issues/7156
  [i915#7625]: https://gitlab.freedesktop.org/drm/intel/issues/7625
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7834]: https://gitlab.freedesktop.org/drm/intel/issues/7834


Build changes
-------------

  * Linux: CI_DRM_12585 -> Patchwork_112759v3

  CI-20190529: 20190529
  CI_DRM_12585: 68d139b609a97a83e7c231189d4864aba4e1679b @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7119: 1e6d24e6dfa42b22f950f7d5e436b8f9acf8747f @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_112759v3: 68d139b609a97a83e7c231189d4864aba4e1679b @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

2806b1096ec5 vfio: fix potential deadlock on vfio group lock

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/index.html


* [Intel-gfx] ✓ Fi.CI.IGT: success for vfio: fix potential deadlock on vfio group lock (rev3)
  2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
@ 2023-01-14  8:37 ` Patchwork
  -1 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2023-01-14  8:37 UTC (permalink / raw)
  To: Matthew Rosato; +Cc: intel-gfx


== Series Details ==

Series: vfio: fix potential deadlock on vfio group lock (rev3)
URL   : https://patchwork.freedesktop.org/series/112759/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_12585_full -> Patchwork_112759v3_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/index.html

Participating hosts (12 -> 10)
------------------------------

  Additional (1): shard-rkl0 
  Missing    (3): pig-skl-6260u pig-kbl-iris pig-glk-j5005 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_112759v3_full:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@kms_addfb_basic@addfb25-y-tiled-legacy:
    - {shard-dg1}:        [PASS][1] -> [DMESG-WARN][2] +2 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-dg1-14/igt@kms_addfb_basic@addfb25-y-tiled-legacy.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-dg1-14/igt@kms_addfb_basic@addfb25-y-tiled-legacy.html

  
Known issues
------------

  Here are the changes found in Patchwork_112759v3_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@fbdev@write:
    - shard-glk:          [PASS][3] -> [FAIL][4] ([i915#6724])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-glk1/igt@fbdev@write.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-glk4/igt@fbdev@write.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][5] -> [FAIL][6] ([i915#2842])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-glk4/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-glk2/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@i915_selftest@live@gt_heartbeat:
    - shard-glk:          [PASS][7] -> [DMESG-FAIL][8] ([i915#5334])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-glk7/igt@i915_selftest@live@gt_heartbeat.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-glk8/igt@i915_selftest@live@gt_heartbeat.html

  * igt@perf@stress-open-close:
    - shard-glk:          [PASS][9] -> [INCOMPLETE][10] ([i915#5213])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-glk2/igt@perf@stress-open-close.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-glk7/igt@perf@stress-open-close.html

  * igt@runner@aborted:
    - shard-glk:          NOTRUN -> [FAIL][11] ([i915#4312])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-glk7/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@drm_fdinfo@idle@rcs0:
    - {shard-rkl}:        [FAIL][12] ([i915#7742]) -> [PASS][13] +1 similar issue
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-2/igt@drm_fdinfo@idle@rcs0.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@drm_fdinfo@idle@rcs0.html

  * igt@drm_read@short-buffer-block:
    - {shard-rkl}:        [SKIP][14] ([i915#4098]) -> [PASS][15]
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@drm_read@short-buffer-block.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@drm_read@short-buffer-block.html

  * igt@fbdev@pan:
    - {shard-rkl}:        [SKIP][16] ([i915#2582]) -> [PASS][17] +1 similar issue
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@fbdev@pan.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@fbdev@pan.html

  * igt@fbdev@read:
    - {shard-tglu}:       [SKIP][18] ([i915#2582]) -> [PASS][19]
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@fbdev@read.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@fbdev@read.html

  * igt@fbdev@write:
    - {shard-dg1}:        [FAIL][20] ([i915#7863]) -> [PASS][21]
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-dg1-18/igt@fbdev@write.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-dg1-15/igt@fbdev@write.html

  * igt@gem_exec_fair@basic-pace@rcs0:
    - {shard-rkl}:        [FAIL][22] ([i915#2842]) -> [PASS][23] +3 similar issues
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-4/igt@gem_exec_fair@basic-pace@rcs0.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@gem_exec_fair@basic-pace@rcs0.html

  * igt@gem_exec_reloc@basic-gtt-read-noreloc:
    - {shard-rkl}:        [SKIP][24] ([i915#3281]) -> [PASS][25] +9 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-2/igt@gem_exec_reloc@basic-gtt-read-noreloc.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@gem_exec_reloc@basic-gtt-read-noreloc.html

  * igt@gem_mmap_wc@set-cache-level:
    - {shard-tglu}:       [SKIP][26] ([i915#1850]) -> [PASS][27]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@gem_mmap_wc@set-cache-level.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@gem_mmap_wc@set-cache-level.html
    - {shard-rkl}:        [SKIP][28] ([i915#1850]) -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@gem_mmap_wc@set-cache-level.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@gem_mmap_wc@set-cache-level.html

  * igt@gem_pwrite@basic-self:
    - {shard-rkl}:        [SKIP][30] ([i915#3282]) -> [PASS][31] +3 similar issues
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-2/igt@gem_pwrite@basic-self.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@gem_pwrite@basic-self.html

  * igt@gen9_exec_parse@batch-invalid-length:
    - {shard-rkl}:        [SKIP][32] ([i915#2527]) -> [PASS][33] +1 similar issue
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-2/igt@gen9_exec_parse@batch-invalid-length.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@gen9_exec_parse@batch-invalid-length.html

  * igt@i915_pm_rpm@cursor:
    - {shard-tglu}:       [SKIP][34] ([i915#1849]) -> [PASS][35] +3 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@i915_pm_rpm@cursor.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@i915_pm_rpm@cursor.html

  * igt@i915_pm_rpm@system-suspend-modeset:
    - {shard-rkl}:        [SKIP][36] ([fdo#109308]) -> [PASS][37]
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@i915_pm_rpm@system-suspend-modeset.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@i915_pm_rpm@system-suspend-modeset.html
    - {shard-tglu}:       [SKIP][38] ([i915#3547]) -> [PASS][39]
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@i915_pm_rpm@system-suspend-modeset.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@i915_pm_rpm@system-suspend-modeset.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite:
    - {shard-rkl}:        [SKIP][40] ([i915#1849] / [i915#4098]) -> [PASS][41] +16 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-pwrite.html

  * igt@kms_plane@plane-panning-bottom-right@pipe-a-planes:
    - {shard-rkl}:        [SKIP][42] ([i915#1849]) -> [PASS][43] +2 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@kms_plane@plane-panning-bottom-right@pipe-a-planes.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@kms_plane@plane-panning-bottom-right@pipe-a-planes.html
    - {shard-tglu}:       [SKIP][44] ([i915#1849] / [i915#3558]) -> [PASS][45] +1 similar issue
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@kms_plane@plane-panning-bottom-right@pipe-a-planes.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@kms_plane@plane-panning-bottom-right@pipe-a-planes.html

  * igt@kms_psr@cursor_blt:
    - {shard-rkl}:        [SKIP][46] ([i915#1072]) -> [PASS][47] +1 similar issue
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@kms_psr@cursor_blt.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@kms_psr@cursor_blt.html

  * igt@kms_rotation_crc@exhaust-fences:
    - {shard-rkl}:        [SKIP][48] ([i915#1845] / [i915#4098]) -> [PASS][49] +29 similar issues
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-5/igt@kms_rotation_crc@exhaust-fences.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@kms_rotation_crc@exhaust-fences.html

  * igt@kms_universal_plane@universal-plane-pipe-a-sanity:
    - {shard-tglu}:       [SKIP][50] ([fdo#109274]) -> [PASS][51] +1 similar issue
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@kms_universal_plane@universal-plane-pipe-a-sanity.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@kms_universal_plane@universal-plane-pipe-a-sanity.html
    - {shard-rkl}:        [SKIP][52] ([i915#1845] / [i915#4070] / [i915#4098]) -> [PASS][53]
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-1/igt@kms_universal_plane@universal-plane-pipe-a-sanity.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@kms_universal_plane@universal-plane-pipe-a-sanity.html

  * igt@kms_vblank@pipe-b-ts-continuation-suspend:
    - {shard-tglu}:       [SKIP][54] ([i915#7651]) -> [PASS][55] +9 similar issues
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@kms_vblank@pipe-b-ts-continuation-suspend.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@kms_vblank@pipe-b-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-c-query-forked-busy-hang:
    - {shard-tglu}:       [SKIP][56] ([i915#1845] / [i915#7651]) -> [PASS][57]
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-tglu-6/igt@kms_vblank@pipe-c-query-forked-busy-hang.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-tglu-8/igt@kms_vblank@pipe-c-query-forked-busy-hang.html

  * igt@perf@gen12-unprivileged-single-ctx-counters:
    - {shard-rkl}:        [SKIP][58] ([fdo#109289]) -> [PASS][59]
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-5/igt@perf@gen12-unprivileged-single-ctx-counters.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-6/igt@perf@gen12-unprivileged-single-ctx-counters.html

  * igt@perf@mi-rpc:
    - {shard-rkl}:        [SKIP][60] ([i915#2434]) -> [PASS][61]
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-4/igt@perf@mi-rpc.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@perf@mi-rpc.html

  * igt@prime_vgem@basic-write:
    - {shard-rkl}:        [SKIP][62] ([fdo#109295] / [i915#3291] / [i915#3708]) -> [PASS][63]
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_12585/shard-rkl-4/igt@prime_vgem@basic-write.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/shard-rkl-5/igt@prime_vgem@basic-write.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109302]: https://bugs.freedesktop.org/show_bug.cgi?id=109302
  [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
  [fdo#109309]: https://bugs.freedesktop.org/show_bug.cgi?id=109309
  [fdo#109312]: https://bugs.freedesktop.org/show_bug.cgi?id=109312
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [fdo#109506]: https://bugs.freedesktop.org/show_bug.cgi?id=109506
  [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111644]: https://bugs.freedesktop.org/show_bug.cgi?id=111644
  [fdo#111656]: https://bugs.freedesktop.org/show_bug.cgi?id=111656
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112054]: https://bugs.freedesktop.org/show_bug.cgi?id=112054
  [fdo#112283]: https://bugs.freedesktop.org/show_bug.cgi?id=112283
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1257]: https://gitlab.freedesktop.org/drm/intel/issues/1257
  [i915#132]: https://gitlab.freedesktop.org/drm/intel/issues/132
  [i915#1397]: https://gitlab.freedesktop.org/drm/intel/issues/1397
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/intel/issues/1839
  [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
  [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
  [i915#1850]: https://gitlab.freedesktop.org/drm/intel/issues/1850
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2434]: https://gitlab.freedesktop.org/drm/intel/issues/2434
  [i915#2437]: https://gitlab.freedesktop.org/drm/intel/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/intel/issues/2527
  [i915#2532]: https://gitlab.freedesktop.org/drm/intel/issues/2532
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2582]: https://gitlab.freedesktop.org/drm/intel/issues/2582
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#2658]: https://gitlab.freedesktop.org/drm/intel/issues/2658
  [i915#2672]: https://gitlab.freedesktop.org/drm/intel/issues/2672
  [i915#2681]: https://gitlab.freedesktop.org/drm/intel/issues/2681
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#280]: https://gitlab.freedesktop.org/drm/intel/issues/280
  [i915#284]: https://gitlab.freedesktop.org/drm/intel/issues/284
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#2920]: https://gitlab.freedesktop.org/drm/intel/issues/2920
  [i915#2994]: https://gitlab.freedesktop.org/drm/intel/issues/2994
  [i915#3116]: https://gitlab.freedesktop.org/drm/intel/issues/3116
  [i915#315]: https://gitlab.freedesktop.org/drm/intel/issues/315
  [i915#3281]: https://gitlab.freedesktop.org/drm/intel/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/intel/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/intel/issues/3299
  [i915#3323]: https://gitlab.freedesktop.org/drm/intel/issues/3323
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3361]: https://gitlab.freedesktop.org/drm/intel/issues/3361
  [i915#3469]: https://gitlab.freedesktop.org/drm/intel/issues/3469
  [i915#3528]: https://gitlab.freedesktop.org/drm/intel/issues/3528
  [i915#3546]: https://gitlab.freedesktop.org/drm/intel/issues/3546
  [i915#3547]: https://gitlab.freedesktop.org/drm/intel/issues/3547
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3558]: https://gitlab.freedesktop.org/drm/intel/issues/3558
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3639]: https://gitlab.freedesktop.org/drm/intel/issues/3639
  [i915#3689]: https://gitlab.freedesktop.org/drm/intel/issues/3689
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
  [i915#3742]: https://gitlab.freedesktop.org/drm/intel/issues/3742
  [i915#3804]: https://gitlab.freedesktop.org/drm/intel/issues/3804
  [i915#3825]: https://gitlab.freedesktop.org/drm/intel/issues/3825
  [i915#3840]: https://gitlab.freedesktop.org/drm/intel/issues/3840
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3955]: https://gitlab.freedesktop.org/drm/intel/issues/3955
  [i915#404]: https://gitlab.freedesktop.org/drm/intel/issues/404
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4078]: https://gitlab.freedesktop.org/drm/intel/issues/4078
  [i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#426]: https://gitlab.freedesktop.org/drm/intel/issues/426
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4983]: https://gitlab.freedesktop.org/drm/intel/issues/4983
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5213]: https://gitlab.freedesktop.org/drm/intel/issues/5213
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5288]: https://gitlab.freedesktop.org/drm/intel/issues/5288
  [i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
  [i915#5325]: https://gitlab.freedesktop.org/drm/intel/issues/5325
  [i915#5327]: https://gitlab.freedesktop.org/drm/intel/issues/5327
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#5334]: https://gitlab.freedesktop.org/drm/intel/issues/5334
  [i915#5439]: https://gitlab.freedesktop.org/drm/intel/issues/5439
  [i915#5461]: https://gitlab.freedesktop.org/drm/intel/issues/5461
  [i915#6095]: https://gitlab.freedesktop.org/drm/intel/issues/6095
  [i915#6227]: https://gitlab.freedesktop.org/drm/intel/issues/6227
  [i915#6247]: https://gitlab.freedesktop.org/drm/intel/issues/6247
  [i915#6248]: https://gitlab.freedesktop.org/drm/intel/issues/6248
  [i915#6301]: https://gitlab.freedesktop.org/drm/intel/issues/6301
  [i915#6335]: https://gitlab.freedesktop.org/drm/intel/issues/6335
  [i915#6344]: https://gitlab.freedesktop.org/drm/intel/issues/6344
  [i915#6412]: https://gitlab.freedesktop.org/drm/intel/issues/6412
  [i915#6433]: https://gitlab.freedesktop.org/drm/intel/issues/6433
  [i915#6497]: https://gitlab.freedesktop.org/drm/intel/issues/6497
  [i915#6524]: https://gitlab.freedesktop.org/drm/intel/issues/6524
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#6724]: https://gitlab.freedesktop.org/drm/intel/issues/6724
  [i915#6768]: https://gitlab.freedesktop.org/drm/intel/issues/6768
  [i915#6944]: https://gitlab.freedesktop.org/drm/intel/issues/6944
  [i915#6946]: https://gitlab.freedesktop.org/drm/intel/issues/6946
  [i915#6953]: https://gitlab.freedesktop.org/drm/intel/issues/6953
  [i915#7037]: https://gitlab.freedesktop.org/drm/intel/issues/7037
  [i915#7116]: https://gitlab.freedesktop.org/drm/intel/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/intel/issues/7118
  [i915#7276]: https://gitlab.freedesktop.org/drm/intel/issues/7276
  [i915#7456]: https://gitlab.freedesktop.org/drm/intel/issues/7456
  [i915#7561]: https://gitlab.freedesktop.org/drm/intel/issues/7561
  [i915#7651]: https://gitlab.freedesktop.org/drm/intel/issues/7651
  [i915#7679]: https://gitlab.freedesktop.org/drm/intel/issues/7679
  [i915#7697]: https://gitlab.freedesktop.org/drm/intel/issues/7697
  [i915#7701]: https://gitlab.freedesktop.org/drm/intel/issues/7701
  [i915#7742]: https://gitlab.freedesktop.org/drm/intel/issues/7742
  [i915#7828]: https://gitlab.freedesktop.org/drm/intel/issues/7828
  [i915#7863]: https://gitlab.freedesktop.org/drm/intel/issues/7863


Build changes
-------------

  * Linux: CI_DRM_12585 -> Patchwork_112759v3
  * Piglit: piglit_4509 -> None

  CI-20190529: 20190529
  CI_DRM_12585: 68d139b609a97a83e7c231189d4864aba4e1679b @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_7119: 1e6d24e6dfa42b22f950f7d5e436b8f9acf8747f @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_112759v3: 68d139b609a97a83e7c231189d4864aba4e1679b @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_112759v3/index.html

[-- Attachment #2: Type: text/html, Size: 16912 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
@ 2023-01-16 15:03   ` Jason Gunthorpe
  -1 siblings, 0 replies; 20+ messages in thread
From: Jason Gunthorpe @ 2023-01-16 15:03 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: akrowiak, jjherne, farman, imbrenda, frankja, pmorel, david,
	seanjc, intel-gfx, cohuck, linux-kernel, pasic, kvm, pbonzini,
	linux-s390, borntraeger, intel-gvt-dev

On Fri, Jan 13, 2023 at 07:03:51PM -0500, Matthew Rosato wrote:
> Currently it is possible that the final put of a KVM reference comes from
> vfio during its device close operation.  This occurs while the vfio group
> lock is held; however, if the vfio device is still in the kvm device list,
> then the following call chain could result in a deadlock:
> 
> kvm_put_kvm
>  -> kvm_destroy_vm
>   -> kvm_destroy_devices
>    -> kvm_vfio_destroy
>     -> kvm_vfio_file_set_kvm
>      -> vfio_file_set_kvm
>       -> group->group_lock/group_rwsem
> 
> Avoid this scenario by having vfio core code acquire a KVM reference
> the first time a device is opened and hold that reference until right
> after the group lock is released after the last device is closed.
> 
> Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
> Reported-by: Alex Williamson <alex.williamson@redhat.com>
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
> ---

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
@ 2023-01-17  9:05   ` Tian, Kevin
  -1 siblings, 0 replies; 20+ messages in thread
From: Tian, Kevin @ 2023-01-17  9:05 UTC (permalink / raw)
  To: Matthew Rosato, alex.williamson, pbonzini
  Cc: akrowiak, jjherne, farman, imbrenda, frankja, pmorel, david,
	Christopherson, Sean, intel-gfx, cohuck, linux-kernel, pasic, jgg, kvm,
	linux-s390, borntraeger, intel-gvt-dev

> From: Matthew Rosato <mjrosato@linux.ibm.com>
> Sent: Saturday, January 14, 2023 8:04 AM
>
>  void vfio_device_group_close(struct vfio_device *device)
>  {
> +	void (*put_kvm)(struct kvm *kvm);
> +	struct kvm *kvm;
> +
>  	mutex_lock(&device->group->group_lock);
> +	kvm = device->kvm;
> +	put_kvm = device->put_kvm;
>  	vfio_device_close(device, device->group->iommufd);
> +	if (kvm == device->kvm)
> +		kvm = NULL;

Add a simple comment that this check is to detect the last close

> +void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm)
> +{
> +	if (WARN_ON(!put))
> +		return;

also WARN_ON(!kvm)?

otherwise this looks good to me:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
@ 2023-01-17 21:22   ` Alex Williamson
  -1 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2023-01-17 21:22 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: akrowiak, jjherne, farman, imbrenda, frankja, pmorel, david,
	seanjc, intel-gfx, cohuck, linux-kernel, pasic, jgg, kvm,
	pbonzini, linux-s390, borntraeger, intel-gvt-dev

On Fri, 13 Jan 2023 19:03:51 -0500
Matthew Rosato <mjrosato@linux.ibm.com> wrote:

> Currently it is possible that the final put of a KVM reference comes from
> vfio during its device close operation.  This occurs while the vfio group
> lock is held; however, if the vfio device is still in the kvm device list,
> then the following call chain could result in a deadlock:
> 
> kvm_put_kvm
>  -> kvm_destroy_vm
>   -> kvm_destroy_devices
>    -> kvm_vfio_destroy
>     -> kvm_vfio_file_set_kvm
>      -> vfio_file_set_kvm
>       -> group->group_lock/group_rwsem  
> 
> Avoid this scenario by having vfio core code acquire a KVM reference
> the first time a device is opened and hold that reference until right
> after the group lock is released after the last device is closed.
> 
> Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
> Reported-by: Alex Williamson <alex.williamson@redhat.com>
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
> ---
> Changes from v3:
> * Can't check for open_count after the group lock has been dropped because
>   it would be possible for the count to change again once the group lock
>   is dropped (Jason)
>   Solve this by stashing a copy of the kvm and put_kvm while the group
>   lock is held, nullifying the device copies of these in device_close()
>   as soon as the open_count reaches 0, and then checking to see if the
>   device->kvm changed before dropping the group lock.  If it changed
>   during close, we can drop the reference using the stashed kvm and put
>   function after dropping the group lock.
> 
> Changes from v2:
> * Re-arrange vfio_kvm_set_kvm_safe error path to still trigger
>   device_open with device->kvm=NULL (Alex)
> * get device->dev_set->lock when checking device->open_count (Alex)
> * but don't hold it over the kvm_put_kvm (Jason)
> * get kvm_put symbol upfront and stash it in device until close (Jason)
> * check CONFIG_HAVE_KVM to avoid build errors on architectures without
>   KVM support
> 
> Changes from v1:
> * Re-write using symbol get logic to get kvm ref during first device
>   open, release the ref during device fd close after group lock is
>   released
> * Drop kvm get/put changes to drivers; now that vfio core holds a
>   kvm ref until sometime after the device_close op is called, it
>   should be fine for drivers to get and put their own references to it.
> ---
>  drivers/vfio/group.c     | 23 +++++++++++++--
>  drivers/vfio/vfio.h      |  9 ++++++
>  drivers/vfio/vfio_main.c | 61 +++++++++++++++++++++++++++++++++++++---
>  include/linux/vfio.h     |  2 +-
>  4 files changed, 87 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
> index bb24b2f0271e..b396c17d7390 100644
> --- a/drivers/vfio/group.c
> +++ b/drivers/vfio/group.c
> @@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
>  	}
>  
>  	/*
> -	 * Here we pass the KVM pointer with the group under the lock.  If the
> -	 * device driver will use it, it must obtain a reference and release it
> -	 * during close_device.
> +	 * Here we pass the KVM pointer with the group under the lock.  A
> +	 * reference will be obtained the first time the device is opened and
> +	 * will be held until the open_count reaches 0.
>  	 */
>  	ret = vfio_device_open(device, device->group->iommufd,
>  			       device->group->kvm);
> @@ -179,9 +179,26 @@ static int vfio_device_group_open(struct vfio_device *device)
>  
>  void vfio_device_group_close(struct vfio_device *device)
>  {
> +	void (*put_kvm)(struct kvm *kvm);
> +	struct kvm *kvm;
> +
>  	mutex_lock(&device->group->group_lock);
> +	kvm = device->kvm;
> +	put_kvm = device->put_kvm;
>  	vfio_device_close(device, device->group->iommufd);
> +	if (kvm == device->kvm)
> +		kvm = NULL;

Hmm, so we're using whether the device->kvm pointer gets cleared in
last_close to detect whether we should put the kvm reference.  That's a
bit obscure.  Our get and put is also asymmetric.

Did we decide that we couldn't do this via a schedule_work() from the
last_close function, ie. implementing our own version of an async put?
It seems like that potentially has a cleaner implementation, symmetric
call points, handling all the storing and clearing of kvm related
pointers within the get/put wrappers, passing only a vfio_device to the
put wrapper, using the "vfio_device_" prefix for both.  Potentially
we'd just want an unconditional flush outside of lock here for
deterministic release.

What's the downside?  Thanks,

Alex

>  	mutex_unlock(&device->group->group_lock);
> +
> +	/*
> +	 * The last kvm reference will trigger kvm_destroy_vm, which can in
> +	 * turn re-enter vfio and attempt to acquire the group lock.  Therefore
> +	 * we get a copy of the kvm pointer and the put function under the
> +	 * group lock but wait to put that reference until after releasing the
> +	 * lock.
> +	 */
> +	if (kvm)
> +		vfio_kvm_put_kvm(put_kvm, kvm);
>  }
>  
>  static struct file *vfio_device_open_file(struct vfio_device *device)
> diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
> index f8219a438bfb..08a5a23d6fef 100644
> --- a/drivers/vfio/vfio.h
> +++ b/drivers/vfio/vfio.h
> @@ -251,4 +251,13 @@ extern bool vfio_noiommu __read_mostly;
>  enum { vfio_noiommu = false };
>  #endif
>  
> +#ifdef CONFIG_HAVE_KVM
> +void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm);
> +#else
> +static inline void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm),
> +				    struct kvm *kvm)
> +{
> +}
> +#endif
> +
>  #endif
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index 5177bb061b17..c6bb07af46b8 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -16,6 +16,9 @@
>  #include <linux/fs.h>
>  #include <linux/idr.h>
>  #include <linux/iommu.h>
> +#ifdef CONFIG_HAVE_KVM
> +#include <linux/kvm_host.h>
> +#endif
>  #include <linux/list.h>
>  #include <linux/miscdevice.h>
>  #include <linux/module.h>
> @@ -344,6 +347,49 @@ static bool vfio_assert_device_open(struct vfio_device *device)
>  	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
>  }
>  
> +#ifdef CONFIG_HAVE_KVM
> +static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
> +{
> +	void (*pfn)(struct kvm *kvm);
> +	bool (*fn)(struct kvm *kvm);
> +	bool ret;
> +
> +	pfn = symbol_get(kvm_put_kvm);
> +	if (WARN_ON(!pfn))
> +		return false;
> +
> +	fn = symbol_get(kvm_get_kvm_safe);
> +	if (WARN_ON(!fn)) {
> +		symbol_put(kvm_put_kvm);
> +		return false;
> +	}
> +
> +	ret = fn(kvm);
> +	if (ret)
> +		device->put_kvm = pfn;
> +	else
> +		symbol_put(kvm_put_kvm);
> +
> +	symbol_put(kvm_get_kvm_safe);
> +
> +	return ret;
> +}
> +
> +void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm)
> +{
> +	if (WARN_ON(!put))
> +		return;
> +
> +	put(kvm);
> +	symbol_put(kvm_put_kvm);
> +}
> +#else
> +static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
> +{
> +	return false;
> +}
> +#endif
> +
>  static int vfio_device_first_open(struct vfio_device *device,
>  				  struct iommufd_ctx *iommufd, struct kvm *kvm)
>  {
> @@ -361,16 +407,22 @@ static int vfio_device_first_open(struct vfio_device *device,
>  	if (ret)
>  		goto err_module_put;
>  
> -	device->kvm = kvm;
> +	if (kvm && vfio_kvm_get_kvm_safe(device, kvm))
> +		device->kvm = kvm;
> +
>  	if (device->ops->open_device) {
>  		ret = device->ops->open_device(device);
>  		if (ret)
> -			goto err_unuse_iommu;
> +			goto err_put_kvm;
>  	}
>  	return 0;
>  
> -err_unuse_iommu:
> -	device->kvm = NULL;
> +err_put_kvm:
> +	if (device->kvm) {
> +		vfio_kvm_put_kvm(device->put_kvm, device->kvm);
> +		device->put_kvm = NULL;
> +		device->kvm = NULL;
> +	}
>  	if (iommufd)
>  		vfio_iommufd_unbind(device);
>  	else
> @@ -388,6 +440,7 @@ static void vfio_device_last_close(struct vfio_device *device,
>  	if (device->ops->close_device)
>  		device->ops->close_device(device);
>  	device->kvm = NULL;
> +	device->put_kvm = NULL;
>  	if (iommufd)
>  		vfio_iommufd_unbind(device);
>  	else
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 35be78e9ae57..87ff862ff555 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -46,7 +46,6 @@ struct vfio_device {
>  	struct vfio_device_set *dev_set;
>  	struct list_head dev_set_list;
>  	unsigned int migration_flags;
> -	/* Driver must reference the kvm during open_device or never touch it */
>  	struct kvm *kvm;
>  
>  	/* Members below here are private, not for driver use */
> @@ -58,6 +57,7 @@ struct vfio_device {
>  	struct list_head group_next;
>  	struct list_head iommu_entry;
>  	struct iommufd_access *iommufd_access;
> +	void (*put_kvm)(struct kvm *kvm);
>  #if IS_ENABLED(CONFIG_IOMMUFD)
>  	struct iommufd_device *iommufd_device;
>  	struct iommufd_ctx *iommufd_ictx;


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v4] vfio: fix potential deadlock on vfio group lock
@ 2023-01-17 21:22   ` Alex Williamson
  0 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2023-01-17 21:22 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: pbonzini, jgg, cohuck, farman, pmorel, borntraeger, frankja,
	imbrenda, david, akrowiak, jjherne, pasic, zhenyuw, zhi.a.wang,
	seanjc, linux-s390, kvm, intel-gvt-dev, intel-gfx, linux-kernel

On Fri, 13 Jan 2023 19:03:51 -0500
Matthew Rosato <mjrosato@linux.ibm.com> wrote:

> Currently it is possible that the final put of a KVM reference comes from
> vfio during its device close operation.  This occurs while the vfio group
> lock is held; however, if the vfio device is still in the kvm device list,
> then the following call chain could result in a deadlock:
> 
> kvm_put_kvm
>  -> kvm_destroy_vm
>   -> kvm_destroy_devices
>    -> kvm_vfio_destroy
>     -> kvm_vfio_file_set_kvm
>      -> vfio_file_set_kvm
>       -> group->group_lock/group_rwsem  
> 
> Avoid this scenario by having vfio core code acquire a KVM reference
> the first time a device is opened and hold that reference until right
> after the group lock is released after the last device is closed.
> 
> Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
> Reported-by: Alex Williamson <alex.williamson@redhat.com>
> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
> ---
> Changes from v3:
> * Can't check for open_count after the group lock has been dropped because
>   it would be possible for the count to change again once the group lock
>   is dropped (Jason)
>   Solve this by stashing a copy of the kvm and put_kvm while the group
>   lock is held, nullifying the device copies of these in device_close()
>   as soon as the open_count reaches 0, and then checking to see if the
>   device->kvm changed before dropping the group lock.  If it changed
>   during close, we can drop the reference using the stashed kvm and put
>   function after dropping the group lock.
> 
> Changes from v2:
> * Re-arrange vfio_kvm_set_kvm_safe error path to still trigger
>   device_open with device->kvm=NULL (Alex)
> * get device->dev_set->lock when checking device->open_count (Alex)
> * but don't hold it over the kvm_put_kvm (Jason)
> * get kvm_put symbol upfront and stash it in device until close (Jason)
> * check CONFIG_HAVE_KVM to avoid build errors on architectures without
>   KVM support
> 
> Changes from v1:
> * Re-write using symbol get logic to get kvm ref during first device
>   open, release the ref during device fd close after group lock is
>   released
> * Drop kvm get/put changes to drivers; now that vfio core holds a
>   kvm ref until sometime after the device_close op is called, it
>   should be fine for drivers to get and put their own references to it.
> ---
>  drivers/vfio/group.c     | 23 +++++++++++++--
>  drivers/vfio/vfio.h      |  9 ++++++
>  drivers/vfio/vfio_main.c | 61 +++++++++++++++++++++++++++++++++++++---
>  include/linux/vfio.h     |  2 +-
>  4 files changed, 87 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
> index bb24b2f0271e..b396c17d7390 100644
> --- a/drivers/vfio/group.c
> +++ b/drivers/vfio/group.c
> @@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
>  	}
>  
>  	/*
> -	 * Here we pass the KVM pointer with the group under the lock.  If the
> -	 * device driver will use it, it must obtain a reference and release it
> -	 * during close_device.
> +	 * Here we pass the KVM pointer with the group under the lock.  A
> +	 * reference will be obtained the first time the device is opened and
> +	 * will be held until the open_count reaches 0.
>  	 */
>  	ret = vfio_device_open(device, device->group->iommufd,
>  			       device->group->kvm);
> @@ -179,9 +179,26 @@ static int vfio_device_group_open(struct vfio_device *device)
>  
>  void vfio_device_group_close(struct vfio_device *device)
>  {
> +	void (*put_kvm)(struct kvm *kvm);
> +	struct kvm *kvm;
> +
>  	mutex_lock(&device->group->group_lock);
> +	kvm = device->kvm;
> +	put_kvm = device->put_kvm;
>  	vfio_device_close(device, device->group->iommufd);
> +	if (kvm == device->kvm)
> +		kvm = NULL;

Hmm, so we're using whether the device->kvm pointer gets cleared in
last_close to detect whether we should put the kvm reference.  That's a
bit obscure.  Our get and put is also asymmetric.

Did we decide that we couldn't do this via a schedule_work() from the
last_close function, ie. implementing our own version of an async put?
It seems like that potentially has a cleaner implementation, symmetric
call points, handling all the storing and clearing of kvm related
pointers within the get/put wrappers, passing only a vfio_device to the
put wrapper, using the "vfio_device_" prefix for both.  Potentially
we'd just want an unconditional flush outside of lock here for
deterministic release.

What's the downside?  Thanks,

Alex

>  	mutex_unlock(&device->group->group_lock);
> +
> +	/*
> +	 * The last kvm reference will trigger kvm_destroy_vm, which can in
> +	 * turn re-enter vfio and attempt to acquire the group lock.  Therefore
> +	 * we get a copy of the kvm pointer and the put function under the
> +	 * group lock but wait to put that reference until after releasing the
> +	 * lock.
> +	 */
> +	if (kvm)
> +		vfio_kvm_put_kvm(put_kvm, kvm);
>  }
>  
>  static struct file *vfio_device_open_file(struct vfio_device *device)
> diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
> index f8219a438bfb..08a5a23d6fef 100644
> --- a/drivers/vfio/vfio.h
> +++ b/drivers/vfio/vfio.h
> @@ -251,4 +251,13 @@ extern bool vfio_noiommu __read_mostly;
>  enum { vfio_noiommu = false };
>  #endif
>  
> +#ifdef CONFIG_HAVE_KVM
> +void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm);
> +#else
> +static inline void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm),
> +				    struct kvm *kvm)
> +{
> +}
> +#endif
> +
>  #endif
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index 5177bb061b17..c6bb07af46b8 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -16,6 +16,9 @@
>  #include <linux/fs.h>
>  #include <linux/idr.h>
>  #include <linux/iommu.h>
> +#ifdef CONFIG_HAVE_KVM
> +#include <linux/kvm_host.h>
> +#endif
>  #include <linux/list.h>
>  #include <linux/miscdevice.h>
>  #include <linux/module.h>
> @@ -344,6 +347,49 @@ static bool vfio_assert_device_open(struct vfio_device *device)
>  	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
>  }
>  
> +#ifdef CONFIG_HAVE_KVM
> +static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
> +{
> +	void (*pfn)(struct kvm *kvm);
> +	bool (*fn)(struct kvm *kvm);
> +	bool ret;
> +
> +	pfn = symbol_get(kvm_put_kvm);
> +	if (WARN_ON(!pfn))
> +		return false;
> +
> +	fn = symbol_get(kvm_get_kvm_safe);
> +	if (WARN_ON(!fn)) {
> +		symbol_put(kvm_put_kvm);
> +		return false;
> +	}
> +
> +	ret = fn(kvm);
> +	if (ret)
> +		device->put_kvm = pfn;
> +	else
> +		symbol_put(kvm_put_kvm);
> +
> +	symbol_put(kvm_get_kvm_safe);
> +
> +	return ret;
> +}
> +
> +void vfio_kvm_put_kvm(void (*put)(struct kvm *kvm), struct kvm *kvm)
> +{
> +	if (WARN_ON(!put))
> +		return;
> +
> +	put(kvm);
> +	symbol_put(kvm_put_kvm);
> +}
> +#else
> +static bool vfio_kvm_get_kvm_safe(struct vfio_device *device, struct kvm *kvm)
> +{
> +	return false;
> +}
> +#endif
> +
>  static int vfio_device_first_open(struct vfio_device *device,
>  				  struct iommufd_ctx *iommufd, struct kvm *kvm)
>  {
> @@ -361,16 +407,22 @@ static int vfio_device_first_open(struct vfio_device *device,
>  	if (ret)
>  		goto err_module_put;
>  
> -	device->kvm = kvm;
> +	if (kvm && vfio_kvm_get_kvm_safe(device, kvm))
> +		device->kvm = kvm;
> +
>  	if (device->ops->open_device) {
>  		ret = device->ops->open_device(device);
>  		if (ret)
> -			goto err_unuse_iommu;
> +			goto err_put_kvm;
>  	}
>  	return 0;
>  
> -err_unuse_iommu:
> -	device->kvm = NULL;
> +err_put_kvm:
> +	if (device->kvm) {
> +		vfio_kvm_put_kvm(device->put_kvm, device->kvm);
> +		device->put_kvm = NULL;
> +		device->kvm = NULL;
> +	}
>  	if (iommufd)
>  		vfio_iommufd_unbind(device);
>  	else
> @@ -388,6 +440,7 @@ static void vfio_device_last_close(struct vfio_device *device,
>  	if (device->ops->close_device)
>  		device->ops->close_device(device);
>  	device->kvm = NULL;
> +	device->put_kvm = NULL;
>  	if (iommufd)
>  		vfio_iommufd_unbind(device);
>  	else
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 35be78e9ae57..87ff862ff555 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -46,7 +46,6 @@ struct vfio_device {
>  	struct vfio_device_set *dev_set;
>  	struct list_head dev_set_list;
>  	unsigned int migration_flags;
> -	/* Driver must reference the kvm during open_device or never touch it */
>  	struct kvm *kvm;
>  
>  	/* Members below here are private, not for driver use */
> @@ -58,6 +57,7 @@ struct vfio_device {
>  	struct list_head group_next;
>  	struct list_head iommu_entry;
>  	struct iommufd_access *iommufd_access;
> +	void (*put_kvm)(struct kvm *kvm);
>  #if IS_ENABLED(CONFIG_IOMMUFD)
>  	struct iommufd_device *iommufd_device;
>  	struct iommufd_ctx *iommufd_ictx;
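For illustration, the close-path pattern this patch implements (stash the kvm pointer and put function under the group lock, detect whether last-close dropped them, and do the final put only after unlocking) can be sketched in plain userspace C. All names here are hypothetical stand-ins, with a pthread mutex in place of group_lock and a simple counter in place of the real kvm refcount:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct kvm { int refcount; };

struct fake_device {
	pthread_mutex_t group_lock;
	int open_count;
	struct kvm *kvm;
	void (*put_kvm)(struct kvm *kvm);
};

/* Sample put routine standing in for kvm_put_kvm(). */
static void sample_put(struct kvm *kvm)
{
	kvm->refcount--;
}

/* Mirrors vfio_device_close(): the last close clears the device copies. */
static void fake_device_close(struct fake_device *device)
{
	if (--device->open_count == 0) {
		device->kvm = NULL;
		device->put_kvm = NULL;
	}
}

/* Mirrors the patched vfio_device_group_close(). */
static void fake_device_group_close(struct fake_device *device)
{
	void (*put_kvm)(struct kvm *kvm);
	struct kvm *kvm;

	pthread_mutex_lock(&device->group_lock);
	kvm = device->kvm;
	put_kvm = device->put_kvm;
	fake_device_close(device);
	/*
	 * If device->kvm is unchanged this was not the last close, so the
	 * device still owns its reference and we must not put it.
	 */
	if (kvm == device->kvm)
		kvm = NULL;
	pthread_mutex_unlock(&device->group_lock);

	/* The final put runs here, after the group lock is released. */
	if (kvm)
		put_kvm(kvm);
}
```

The sketch only captures the locking shape Alex is commenting on; the real code additionally deals with symbol_get/symbol_put of the kvm module symbols.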


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-17 21:22   ` Alex Williamson
@ 2023-01-18  9:03     ` Tian, Kevin
  -1 siblings, 0 replies; 20+ messages in thread
From: Tian, Kevin @ 2023-01-18  9:03 UTC (permalink / raw)
  To: Alex Williamson, Matthew Rosato
  Cc: akrowiak, jjherne, farman, borntraeger, frankja, pmorel, david,
	Christopherson, Sean, intel-gfx, cohuck, linux-kernel, pasic, Liu, Yi L, jgg,
	kvm, pbonzini, linux-s390, imbrenda, intel-gvt-dev

> From: Alex Williamson
> Sent: Wednesday, January 18, 2023 5:23 AM
> 
> On Fri, 13 Jan 2023 19:03:51 -0500
> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
> 
> >  void vfio_device_group_close(struct vfio_device *device)
> >  {
> > +	void (*put_kvm)(struct kvm *kvm);
> > +	struct kvm *kvm;
> > +
> >  	mutex_lock(&device->group->group_lock);
> > +	kvm = device->kvm;
> > +	put_kvm = device->put_kvm;
> >  	vfio_device_close(device, device->group->iommufd);
> > +	if (kvm == device->kvm)
> > +		kvm = NULL;
> 
> Hmm, so we're using whether the device->kvm pointer gets cleared in
> last_close to detect whether we should put the kvm reference.  That's a
> bit obscure.  Our get and put is also asymmetric.
> 
> Did we decide that we couldn't do this via a schedule_work() from the
> last_close function, ie. implementing our own version of an async put?
> It seems like that potentially has a cleaner implementation, symmetric
> call points, handling all the storing and clearing of kvm related
> pointers within the get/put wrappers, passing only a vfio_device to the
> put wrapper, using the "vfio_device_" prefix for both.  Potentially
> we'd just want an unconditional flush outside of lock here for
> deterministic release.
> 
> What's the downside?  Thanks,
> 

btw I guess this can be also fixed by Yi's work here:

https://lore.kernel.org/kvm/20230117134942.101112-6-yi.l.liu@intel.com/

with set_kvm(NULL) moved to the release callback of kvm_vfio device,
such circular lock dependency can be avoided too.


* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-17 21:22   ` Alex Williamson
@ 2023-01-18 14:15     ` Matthew Rosato
  -1 siblings, 0 replies; 20+ messages in thread
From: Matthew Rosato @ 2023-01-18 14:15 UTC (permalink / raw)
  To: Alex Williamson
  Cc: akrowiak, jjherne, farman, imbrenda, frankja, pmorel, david,
	seanjc, intel-gfx, cohuck, linux-kernel, pasic, jgg, kvm,
	pbonzini, linux-s390, borntraeger, intel-gvt-dev

On 1/17/23 4:22 PM, Alex Williamson wrote:
> On Fri, 13 Jan 2023 19:03:51 -0500
> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
> 
>> Currently it is possible that the final put of a KVM reference comes from
>> vfio during its device close operation.  This occurs while the vfio group
>> lock is held; however, if the vfio device is still in the kvm device list,
>> then the following call chain could result in a deadlock:
>>
>> kvm_put_kvm
>>  -> kvm_destroy_vm
>>   -> kvm_destroy_devices
>>    -> kvm_vfio_destroy
>>     -> kvm_vfio_file_set_kvm
>>      -> vfio_file_set_kvm
>>       -> group->group_lock/group_rwsem  
>>
>> Avoid this scenario by having vfio core code acquire a KVM reference
>> the first time a device is opened and hold that reference until right
>> after the group lock is released after the last device is closed.
>>
>> Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
>> Reported-by: Alex Williamson <alex.williamson@redhat.com>
>> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
>> ---
>> Changes from v3:
>> * Can't check for open_count after the group lock has been dropped because
>>   it would be possible for the count to change again once the group lock
>>   is dropped (Jason)
>>   Solve this by stashing a copy of the kvm and put_kvm while the group
>>   lock is held, nullifying the device copies of these in device_close()
>>   as soon as the open_count reaches 0, and then checking to see if the
>>   device->kvm changed before dropping the group lock.  If it changed
>>   during close, we can drop the reference using the stashed kvm and put
>>   function after dropping the group lock.
>>
>> Changes from v2:
>> * Re-arrange vfio_kvm_set_kvm_safe error path to still trigger
>>   device_open with device->kvm=NULL (Alex)
>> * get device->dev_set->lock when checking device->open_count (Alex)
>> * but don't hold it over the kvm_put_kvm (Jason)
>> * get kvm_put symbol upfront and stash it in device until close (Jason)
>> * check CONFIG_HAVE_KVM to avoid build errors on architectures without
>>   KVM support
>>
>> Changes from v1:
>> * Re-write using symbol get logic to get kvm ref during first device
>>   open, release the ref during device fd close after group lock is
>>   released
>> * Drop kvm get/put changes to drivers; now that vfio core holds a
>>   kvm ref until sometime after the device_close op is called, it
>>   should be fine for drivers to get and put their own references to it.
>> ---
>>  drivers/vfio/group.c     | 23 +++++++++++++--
>>  drivers/vfio/vfio.h      |  9 ++++++
>>  drivers/vfio/vfio_main.c | 61 +++++++++++++++++++++++++++++++++++++---
>>  include/linux/vfio.h     |  2 +-
>>  4 files changed, 87 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
>> index bb24b2f0271e..b396c17d7390 100644
>> --- a/drivers/vfio/group.c
>> +++ b/drivers/vfio/group.c
>> @@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
>>  	}
>>  
>>  	/*
>> -	 * Here we pass the KVM pointer with the group under the lock.  If the
>> -	 * device driver will use it, it must obtain a reference and release it
>> -	 * during close_device.
>> +	 * Here we pass the KVM pointer with the group under the lock.  A
>> +	 * reference will be obtained the first time the device is opened and
>> +	 * will be held until the open_count reaches 0.
>>  	 */
>>  	ret = vfio_device_open(device, device->group->iommufd,
>>  			       device->group->kvm);
>> @@ -179,9 +179,26 @@ static int vfio_device_group_open(struct vfio_device *device)
>>  
>>  void vfio_device_group_close(struct vfio_device *device)
>>  {
>> +	void (*put_kvm)(struct kvm *kvm);
>> +	struct kvm *kvm;
>> +
>>  	mutex_lock(&device->group->group_lock);
>> +	kvm = device->kvm;
>> +	put_kvm = device->put_kvm;
>>  	vfio_device_close(device, device->group->iommufd);
>> +	if (kvm == device->kvm)
>> +		kvm = NULL;
> 
> Hmm, so we're using whether the device->kvm pointer gets cleared in
> last_close to detect whether we should put the kvm reference.  That's a
> bit obscure.  Our get and put is also asymmetric.
> 
> Did we decide that we couldn't do this via a schedule_work() from the
> last_close function, ie. implementing our own version of an async put?
> It seems like that potentially has a cleaner implementation, symmetric
> call points, handling all the storing and clearing of kvm related
> pointers within the get/put wrappers, passing only a vfio_device to the
> put wrapper, using the "vfio_device_" prefix for both.  Potentially
> we'd just want an unconditional flush outside of lock here for
> deterministic release.
> 
> What's the downside?  Thanks,
> 


I did mention something like this as a possibility when discussing v3.  It's doable; the main issue with doing a schedule_work() of an async put is that we can't actually use the device->kvm / device->put_kvm values during the scheduled work task, since they become unreliable once we drop the group lock -- e.g. we schedule_work() some put call while under the group lock, drop the group lock, and then another thread takes the group lock and does a new open_device() before the async put task fires; device->kvm and (less likely) put_kvm might have changed in between.

I think in that case we would need to stash the kvm and put_kvm values in some secondary structure to be processed off a queue by the schedule_work task (an example of what I mean is bio_dirty_list in block/bio.c).  It's very unlikely that this queue would ever have more than one element in it.
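A rough userspace sketch of that queue-based deferred put (all names are hypothetical; the real implementation would live in vfio and use the kernel's workqueue API instead of a direct drain call):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct kvm { int refcount; };

/* One queue entry per deferred put, like a bio on bio_dirty_list. */
struct kvm_put_work {
	struct kvm *kvm;
	void (*put_kvm)(struct kvm *kvm);
	struct kvm_put_work *next;
};

static pthread_mutex_t put_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct kvm_put_work *put_list;

/* Callable under the group lock: stash the pair, defer the actual put. */
static void fake_schedule_kvm_put(struct kvm *kvm,
				  void (*put_kvm)(struct kvm *kvm))
{
	struct kvm_put_work *work = malloc(sizeof(*work));

	work->kvm = kvm;
	work->put_kvm = put_kvm;
	pthread_mutex_lock(&put_list_lock);
	work->next = put_list;
	put_list = work;
	pthread_mutex_unlock(&put_list_lock);
	/* the kernel version would schedule_work() here */
}

/* Work function: runs with no vfio locks held and drains the queue. */
static void fake_kvm_put_worker(void)
{
	struct kvm_put_work *work;

	pthread_mutex_lock(&put_list_lock);
	work = put_list;
	put_list = NULL;
	pthread_mutex_unlock(&put_list_lock);

	while (work) {
		struct kvm_put_work *next = work->next;

		work->put_kvm(work->kvm);
		free(work);
		work = next;
	}
}

/* Sample put routine standing in for kvm_put_kvm(). */
static void sample_put(struct kvm *kvm)
{
	kvm->refcount--;
}
```

Because each entry carries its own kvm/put_kvm pair, the worker never has to trust device->kvm after the group lock is dropped, which is exactly the staleness concern raised above.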



* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-18  9:03     ` Tian, Kevin
@ 2023-01-18 14:55       ` Matthew Rosato
  -1 siblings, 0 replies; 20+ messages in thread
From: Matthew Rosato @ 2023-01-18 14:55 UTC (permalink / raw)
  To: Tian, Kevin, Alex Williamson
  Cc: akrowiak, jjherne, farman, borntraeger, frankja, pmorel, david,
	Christopherson, Sean, intel-gfx, cohuck, linux-kernel, pasic, Liu, Yi L, jgg,
	kvm, pbonzini, linux-s390, imbrenda, intel-gvt-dev

On 1/18/23 4:03 AM, Tian, Kevin wrote:
>> From: Alex Williamson
>> Sent: Wednesday, January 18, 2023 5:23 AM
>>
>> On Fri, 13 Jan 2023 19:03:51 -0500
>> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
>>
>>>  void vfio_device_group_close(struct vfio_device *device)
>>>  {
>>> +	void (*put_kvm)(struct kvm *kvm);
>>> +	struct kvm *kvm;
>>> +
>>>  	mutex_lock(&device->group->group_lock);
>>> +	kvm = device->kvm;
>>> +	put_kvm = device->put_kvm;
>>>  	vfio_device_close(device, device->group->iommufd);
>>> +	if (kvm == device->kvm)
>>> +		kvm = NULL;
>>
>> Hmm, so we're using whether the device->kvm pointer gets cleared in
>> last_close to detect whether we should put the kvm reference.  That's a
>> bit obscure.  Our get and put is also asymmetric.
>>
>> Did we decide that we couldn't do this via a schedule_work() from the
>> last_close function, ie. implementing our own version of an async put?
>> It seems like that potentially has a cleaner implementation, symmetric
>> call points, handling all the storing and clearing of kvm related
>> pointers within the get/put wrappers, passing only a vfio_device to the
>> put wrapper, using the "vfio_device_" prefix for both.  Potentially
>> we'd just want an unconditional flush outside of lock here for
>> deterministic release.
>>
>> What's the downside?  Thanks,
>>
> 
> btw I guess this can be also fixed by Yi's work here:
> 
> https://lore.kernel.org/kvm/20230117134942.101112-6-yi.l.liu@intel.com/
> 
> with set_kvm(NULL) moved to the release callback of kvm_vfio device,
> such circular lock dependency can be avoided too.

Oh, interesting...  It seems to me that this would eliminate the reported call chain altogether:

kvm_put_kvm
 -> kvm_destroy_vm
  -> kvm_destroy_devices
   -> kvm_vfio_destroy (starting here -- this would no longer be executed)
    -> kvm_vfio_file_set_kvm
     -> vfio_file_set_kvm
      -> group->group_lock/group_rwsem

Because kvm_destroy_devices now can't end up calling kvm_vfio_destroy and friends, it won't try to acquire the group lock a second time, which makes a kvm_put_kvm while the group lock is held OK to do.  The vfio_file_set_kvm call will now always come from a separate thread of execution: kvm_vfio_group_add, kvm_vfio_group_del or the release thread:

kvm_device_release (where the group->group_lock would not be held since vfio does not trigger closing of the kvm fd)
 -> kvm_vfio_destroy (or, kvm_vfio_release)
  -> kvm_vfio_file_set_kvm
   -> vfio_file_set_kvm
    -> group->group_lock/group_rwsem
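The self-deadlock being avoided here can be simulated in userspace (hypothetical names; a normal pthread mutex stands in for group_lock, and trylock is used so the demo can observe the would-be deadlock instead of hanging):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t group_lock = PTHREAD_MUTEX_INITIALIZER;
static int deadlock_detected;

/* Stand-in for vfio_file_set_kvm(), which needs the group lock. */
static void fake_vfio_file_set_kvm(void)
{
	/*
	 * group_lock is a normal (non-recursive) mutex; if this path runs
	 * while the caller already holds it, a plain lock would hang.
	 * trylock returns EBUSY instead, letting us record the deadlock.
	 */
	if (pthread_mutex_trylock(&group_lock) == EBUSY) {
		deadlock_detected = 1;
		return;
	}
	pthread_mutex_unlock(&group_lock);
}

struct fake_kvm { int refcount; };

/*
 * Stand-in for kvm_put_kvm(): the final put tears down the VM, which
 * walks back into vfio via kvm_vfio_file_set_kvm().
 */
static void fake_kvm_put_kvm(struct fake_kvm *kvm)
{
	if (--kvm->refcount == 0)
		fake_vfio_file_set_kvm();
}
```

Dropping the final reference while holding the lock trips the re-entry; dropping it after unlock (as the patch does), or never re-entering vfio from destroy (as in Yi's series), both avoid it.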


* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-18 14:55       ` Matthew Rosato
@ 2023-01-19  3:43         ` Tian, Kevin
  -1 siblings, 0 replies; 20+ messages in thread
From: Tian, Kevin @ 2023-01-19  3:43 UTC (permalink / raw)
  To: Matthew Rosato, Alex Williamson
  Cc: akrowiak, jjherne, farman, borntraeger, frankja, pmorel, david,
	Christopherson, Sean, intel-gfx, cohuck, linux-kernel, pasic, Liu, Yi L, jgg,
	kvm, pbonzini, linux-s390, imbrenda, intel-gvt-dev

> From: Matthew Rosato <mjrosato@linux.ibm.com>
> Sent: Wednesday, January 18, 2023 10:56 PM
> 
> On 1/18/23 4:03 AM, Tian, Kevin wrote:
> >> From: Alex Williamson
> >> Sent: Wednesday, January 18, 2023 5:23 AM
> >>
> >> On Fri, 13 Jan 2023 19:03:51 -0500
> >> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
> >>
> >>>  void vfio_device_group_close(struct vfio_device *device)
> >>>  {
> >>> +	void (*put_kvm)(struct kvm *kvm);
> >>> +	struct kvm *kvm;
> >>> +
> >>>  	mutex_lock(&device->group->group_lock);
> >>> +	kvm = device->kvm;
> >>> +	put_kvm = device->put_kvm;
> >>>  	vfio_device_close(device, device->group->iommufd);
> >>> +	if (kvm == device->kvm)
> >>> +		kvm = NULL;
> >>
> >> Hmm, so we're using whether the device->kvm pointer gets cleared in
> >> last_close to detect whether we should put the kvm reference.  That's a
> >> bit obscure.  Our get and put is also asymmetric.
> >>
> >> Did we decide that we couldn't do this via a schedule_work() from the
> >> last_close function, ie. implementing our own version of an async put?
> >> It seems like that potentially has a cleaner implementation, symmetric
> >> call points, handling all the storing and clearing of kvm related
> >> pointers within the get/put wrappers, passing only a vfio_device to the
> >> put wrapper, using the "vfio_device_" prefix for both.  Potentially
> >> we'd just want an unconditional flush outside of lock here for
> >> deterministic release.
> >>
> >> What's the downside?  Thanks,
> >>
> >
> > btw I guess this can be also fixed by Yi's work here:
> >
> > https://lore.kernel.org/kvm/20230117134942.101112-6-yi.l.liu@intel.com/
> >
> > with set_kvm(NULL) moved to the release callback of kvm_vfio device,
> > such circular lock dependency can be avoided too.
> 
> Oh, interesting...  It seems to me that this would eliminate the reported call
> chain altogether:
> 
> kvm_put_kvm
>  -> kvm_destroy_vm
>   -> kvm_destroy_devices
>    -> kvm_vfio_destroy (starting here -- this would no longer be executed)
>     -> kvm_vfio_file_set_kvm
>      -> vfio_file_set_kvm
>       -> group->group_lock/group_rwsem
> 
> because kvm_destroy_devices now can't end up calling kvm_vfio_destroy
> and friends, it won't try and acquire the group lock a 2nd time making a
> kvm_put_kvm while the group lock is held OK to do.  The vfio_file_set_kvm
> call will now always come from a separate thread of execution,
> kvm_vfio_group_add, kvm_vfio_group_del or the release thread:
> 
> kvm_device_release (where the group->group_lock would not be held since
> vfio does not trigger closing of the kvm fd)
>  -> kvm_vfio_destroy (or, kvm_vfio_release)
>   -> kvm_vfio_file_set_kvm
>    -> vfio_file_set_kvm
>     -> group->group_lock/group_rwsem

Yes, that's my point.  If Alex/Jason are also OK with it, Yi can probably
send that patch separately as a fix for this issue.  It's much simpler. 😊


* Re: [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock
  2023-01-19  3:43         ` Tian, Kevin
@ 2023-01-19 19:05           ` Alex Williamson
  -1 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2023-01-19 19:05 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Matthew Rosato, david, imbrenda, linux-s390, Liu, Yi L, frankja,
	pasic, jgg, borntraeger, jjherne, farman, intel-gfx, kvm,
	intel-gvt-dev, akrowiak, pmorel, Christopherson, ,
	Sean, cohuck, linux-kernel, pbonzini

On Thu, 19 Jan 2023 03:43:36 +0000
"Tian, Kevin" <kevin.tian@intel.com> wrote:

> > From: Matthew Rosato <mjrosato@linux.ibm.com>
> > Sent: Wednesday, January 18, 2023 10:56 PM
> > 
> > On 1/18/23 4:03 AM, Tian, Kevin wrote:  
> > >> From: Alex Williamson
> > >> Sent: Wednesday, January 18, 2023 5:23 AM
> > >>
> > >> On Fri, 13 Jan 2023 19:03:51 -0500
> > >> Matthew Rosato <mjrosato@linux.ibm.com> wrote:
> > >>  
> > >>>  void vfio_device_group_close(struct vfio_device *device)
> > >>>  {
> > >>> +	void (*put_kvm)(struct kvm *kvm);
> > >>> +	struct kvm *kvm;
> > >>> +
> > >>>  	mutex_lock(&device->group->group_lock);
> > >>> +	kvm = device->kvm;
> > >>> +	put_kvm = device->put_kvm;
> > >>>  	vfio_device_close(device, device->group->iommufd);
> > >>> +	if (kvm == device->kvm)
> > >>> +		kvm = NULL;  
> > >>
> > >> Hmm, so we're using whether the device->kvm pointer gets cleared in
> > >> last_close to detect whether we should put the kvm reference.  That's a
> > >> bit obscure.  Our get and put is also asymmetric.
> > >>
> > >> Did we decide that we couldn't do this via a schedule_work() from the
> > >> last_close function, ie. implementing our own version of an async put?
> > >> It seems like that potentially has a cleaner implementation, symmetric
> > >> call points, handling all the storing and clearing of kvm related
> > >> pointers within the get/put wrappers, passing only a vfio_device to the
> > >> put wrapper, using the "vfio_device_" prefix for both.  Potentially
> > >> we'd just want an unconditional flush outside of lock here for
> > >> deterministic release.
> > >>
> > >> What's the downside?  Thanks,
> > >>  
> > >
> > > btw I guess this can be also fixed by Yi's work here:
> > >
> > > https://lore.kernel.org/kvm/20230117134942.101112-6-yi.l.liu@intel.com/
> > >
> > > with set_kvm(NULL) moved to the release callback of kvm_vfio device,
> > > such circular lock dependency can be avoided too.  
> > 
> > Oh, interesting...  It seems to me that this would eliminate the reported call
> > chain altogether:
> > 
> > kvm_put_kvm  
> >  -> kvm_destroy_vm
> >   -> kvm_destroy_devices
> >    -> kvm_vfio_destroy (starting here -- this would no longer be executed)
> >     -> kvm_vfio_file_set_kvm
> >      -> vfio_file_set_kvm
> >       -> group->group_lock/group_rwsem  
> > 
> > because kvm_destroy_devices now can't end up calling kvm_vfio_destroy
> > and friends, it won't try and acquire the group lock a 2nd time making a
> > kvm_put_kvm while the group lock is held OK to do.  The vfio_file_set_kvm
> > call will now always come from a separate thread of execution,
> > kvm_vfio_group_add, kvm_vfio_group_del or the release thread:
> > 
> > kvm_device_release (where the group->group_lock would not be held since
> > vfio does not trigger closing of the kvm fd)  
> >  -> kvm_vfio_destroy (or, kvm_vfio_release)
> >   -> kvm_vfio_file_set_kvm
> >    -> vfio_file_set_kvm
> >     -> group->group_lock/group_rwsem  
> 
> Yes, that's my point. If Alex/Jason are also OK with it probably Yi can
> send that patch separately as a fix to this issue. It's much simpler. 😊

If we can extract that flow separate from the cdev refactoring, ideally
something that matches the stable kernel backport rules, then that
sounds like the preferred solution.  Thanks,

Alex




end of thread, other threads:[~2023-01-19 19:07 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-14  0:03 [PATCH v4] vfio: fix potential deadlock on vfio group lock Matthew Rosato
2023-01-14  0:03 ` [Intel-gfx] " Matthew Rosato
2023-01-14  1:12 ` [Intel-gfx] ✓ Fi.CI.BAT: success for vfio: fix potential deadlock on vfio group lock (rev3) Patchwork
2023-01-14  8:37 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2023-01-16 15:03 ` [Intel-gfx] [PATCH v4] vfio: fix potential deadlock on vfio group lock Jason Gunthorpe
2023-01-16 15:03   ` Jason Gunthorpe
2023-01-17  9:05 ` [Intel-gfx] " Tian, Kevin
2023-01-17  9:05   ` Tian, Kevin
2023-01-17 21:22 ` [Intel-gfx] " Alex Williamson
2023-01-17 21:22   ` Alex Williamson
2023-01-18  9:03   ` [Intel-gfx] " Tian, Kevin
2023-01-18  9:03     ` Tian, Kevin
2023-01-18 14:55     ` [Intel-gfx] " Matthew Rosato
2023-01-18 14:55       ` Matthew Rosato
2023-01-19  3:43       ` [Intel-gfx] " Tian, Kevin
2023-01-19  3:43         ` Tian, Kevin
2023-01-19 19:05         ` [Intel-gfx] " Alex Williamson
2023-01-19 19:05           ` Alex Williamson
2023-01-18 14:15   ` [Intel-gfx] " Matthew Rosato
2023-01-18 14:15     ` Matthew Rosato
