* [PATCH 0/9] Make the rest of the VFIO driver interface use vfio_device
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Prior series have transformed other parts of VFIO from working on struct
device or struct vfio_group into working directly on struct
vfio_device. Based on that work we now have a vfio_device readily
available in all the drivers.

Update the rest of the driver-facing API to take a vfio_device as input.

The following are switched from struct device to struct vfio_device:
  vfio_register_notifier()
  vfio_unregister_notifier()
  vfio_pin_pages()
  vfio_unpin_pages()
  vfio_dma_rw()

The following group APIs become obsolete and are removed, since callers can
use struct vfio_device with the functions above:
  vfio_group_pin_pages()
  vfio_group_unpin_pages()
  vfio_group_iommu_domain()
  vfio_group_get_external_user_from_dev()
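
To illustrate the direction, a call site changes roughly as below. This is
only a sketch with made-up names (example_nb, example_open_device); the
real conversions are in the individual patches:

	static int example_dma_unmap_cb(struct notifier_block *nb,
					unsigned long action, void *data)
	{
		return NOTIFY_OK;
	}

	static struct notifier_block example_nb = {
		.notifier_call = example_dma_unmap_cb,
	};

	static int example_open_device(struct vfio_device *vdev)
	{
		unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

		/* before: vfio_register_notifier(vdev->dev, ...) */
		return vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
					      &events, &example_nb);
	}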

To retain the performance of the new device APIs relative to their group
versions, optimize how vfio_group_add_container_user() is used: avoid
calling it when the driver must already guarantee the device is open and
the container_users count has been incremented.

The remaining exported VFIO group interfaces are only used by kvm, and are
addressed by a parallel series.

There is a conflict with Christoph's gvt rework here:

 https://lore.kernel.org/all/20220411141403.86980-1-hch@lst.de/

I've organized this so it is independent of Christoph's series by adding
the temporary mdev_legacy_get_vfio_device(); however, it is easy for me to
rebase. We can decide what to do as we see what becomes mergeable. My
preference would be to see Christoph's series merged into the drm and vfio
trees so that both series can go in this cycle.

I have a follow-up series that needs this.

This is also part of the iommufd work: moving the driver-facing interface
to vfio_device provides a much cleaner path to integrate with iommufd.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Jason Gunthorpe (9):
  vfio: Make vfio_(un)register_notifier accept a vfio_device
  vfio/ccw: Remove mdev from struct channel_program
  vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  drm/i915/gvt: Change from vfio_group_(un)pin_pages to
    vfio_(un)pin_pages
  vfio: Pass in a struct vfio_device * to vfio_dma_rw()
  drm/i915/gvt: Add missing module_put() in error unwind
  drm/i915/gvt: Delete kvmgt_vdev::vfio_group
  vfio: Remove dead code
  vfio: Remove calls to vfio_group_add_container_user()

 .../driver-api/vfio-mediated-device.rst       |   4 +-
 drivers/gpu/drm/i915/gvt/kvmgt.c              |  48 ++-
 drivers/s390/cio/vfio_ccw_cp.c                |  44 +--
 drivers/s390/cio/vfio_ccw_cp.h                |   4 +-
 drivers/s390/cio/vfio_ccw_fsm.c               |   3 +-
 drivers/s390/cio/vfio_ccw_ops.c               |   7 +-
 drivers/s390/crypto/vfio_ap_ops.c             |  22 +-
 drivers/vfio/mdev/vfio_mdev.c                 |  12 +
 drivers/vfio/vfio.c                           | 283 ++----------------
 include/linux/mdev.h                          |   1 +
 include/linux/vfio.h                          |  21 +-
 11 files changed, 115 insertions(+), 334 deletions(-)


base-commit: ce522ba9ef7e2d9fb22a39eb3371c0c64e2a433e
-- 
2.35.1


* [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

All callers have a struct vfio_device trivially available; pass it in
directly and avoid calling the expensive vfio_group_get_from_dev().

To support the unconverted kvmgt mdev driver, add
mdev_legacy_get_vfio_device(), which returns the vfio_device pointer that
vfio_mdev.c stores in the drvdata.
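
For an unconverted driver the usage is roughly as follows (illustrative
sketch only; example_mdev_open_device() is a made-up name, the real kvmgt
conversion is in the hunks below):

	static int example_mdev_open_device(struct mdev_device *mdev)
	{
		struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(mdev);

		if (!vfio_dev)
			return -ENODEV;

		/* vfio_dev can now be passed to the vfio_device based APIs */
		return 0;
	}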

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
 drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
 drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
 drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
 drivers/vfio/vfio.c               | 25 +++++++------------------
 include/linux/mdev.h              |  1 +
 include/linux/vfio.h              |  4 ++--
 7 files changed, 41 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 057ec449010458..bb59d21cf898ab 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
 
 static int intel_vgpu_open_device(struct mdev_device *mdev)
 {
+	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(mdev);
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long events;
@@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
 
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
+	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
 				&vdev->iommu_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
@@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 	}
 
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
-	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
+	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
 				&vdev->group_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
@@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 	vdev->vfio_group = NULL;
 
 undo_register:
-	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
+	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 					&vdev->group_notifier);
 
 undo_iommu:
-	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
+	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
 					&vdev->iommu_notifier);
 out:
 	return ret;
@@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
 	struct kvmgt_guest_info *info;
+	struct vfio_device *vfio_dev;
 	int ret;
 
 	if (!handle_valid(vgpu->handle))
@@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 
 	intel_gvt_ops->vgpu_release(vgpu);
 
-	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
+	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
+	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
 					&vdev->iommu_notifier);
 	drm_WARN(&i915->drm, ret,
 		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
 
-	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
+	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 					&vdev->group_notifier);
 	drm_WARN(&i915->drm, ret,
 		 "vfio_unregister_notifier for group failed: %d\n", ret);
diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
index d8589afac272f1..e1ce24d8fb2555 100644
--- a/drivers/s390/cio/vfio_ccw_ops.c
+++ b/drivers/s390/cio/vfio_ccw_ops.c
@@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 
 	private->nb.notifier_call = vfio_ccw_mdev_notifier;
 
-	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
+	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
 				     &events, &private->nb);
 	if (ret)
 		return ret;
@@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 
 out_unregister:
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
-				 &private->nb);
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 	return ret;
 }
 
@@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
 
 	cp_free(&private->cp);
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private->nb);
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 }
 
 static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 6e08d04b605d6e..69768061cd7bd9 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
 	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
 
-	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
-				     &events, &matrix_mdev->group_notifier);
+	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
+				     &matrix_mdev->group_notifier);
 	if (ret)
 		return ret;
 
 	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
-				     &events, &matrix_mdev->iommu_notifier);
+	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
+				     &matrix_mdev->iommu_notifier);
 	if (ret)
 		goto out_unregister_group;
 	return 0;
 
 out_unregister_group:
-	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
 				 &matrix_mdev->group_notifier);
 	return ret;
 }
@@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
 				 &matrix_mdev->iommu_notifier);
-	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
 				 &matrix_mdev->group_notifier);
 	vfio_ap_mdev_unset_kvm(matrix_mdev);
 }
diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c
index a90e24b0c851d3..91605c1e8c8f94 100644
--- a/drivers/vfio/mdev/vfio_mdev.c
+++ b/drivers/vfio/mdev/vfio_mdev.c
@@ -17,6 +17,18 @@
 
 #include "mdev_private.h"
 
+/*
+ * Return the struct vfio_device for the mdev when using the legacy
+ * vfio_mdev_dev_ops path. No new callers to this function should be added.
+ */
+struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev)
+{
+	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
+		return NULL;
+	return dev_get_drvdata(&mdev->dev);
+}
+EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
+
 static int vfio_mdev_open_device(struct vfio_device *core_vdev)
 {
 	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index a4555014bd1e72..8a5c46aa2bef61 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
 	return ret;
 }
 
-int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
+int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 			   unsigned long *events, struct notifier_block *nb)
 {
-	struct vfio_group *group;
+	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!dev || !nb || !events || (*events == 0))
+	if (!nb || !events || (*events == 0))
 		return -EINVAL;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_register_iommu_notifier(group, events, nb);
@@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
 	default:
 		ret = -EINVAL;
 	}
-
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_register_notifier);
 
-int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
+int vfio_unregister_notifier(struct vfio_device *dev,
+			     enum vfio_notify_type type,
 			     struct notifier_block *nb)
 {
-	struct vfio_group *group;
+	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!dev || !nb)
+	if (!nb)
 		return -EINVAL;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_unregister_iommu_notifier(group, nb);
@@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
 	default:
 		ret = -EINVAL;
 	}
-
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unregister_notifier);
diff --git a/include/linux/mdev.h b/include/linux/mdev.h
index 15d03f6532d073..67d07220a28f29 100644
--- a/include/linux/mdev.h
+++ b/include/linux/mdev.h
@@ -29,6 +29,7 @@ static inline struct mdev_device *to_mdev_device(struct device *dev)
 unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
 unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
 struct device *mtype_get_parent_dev(struct mdev_type *mtype);
+struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev);
 
 /**
  * struct mdev_parent_ops - Structure to be registered for each parent device to
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 66dda06ec42d1b..748ec0e0293aea 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -178,11 +178,11 @@ enum vfio_notify_type {
 /* events for VFIO_GROUP_NOTIFY */
 #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
 
-extern int vfio_register_notifier(struct device *dev,
+extern int vfio_register_notifier(struct vfio_device *dev,
 				  enum vfio_notify_type type,
 				  unsigned long *required_events,
 				  struct notifier_block *nb);
-extern int vfio_unregister_notifier(struct device *dev,
+extern int vfio_unregister_notifier(struct vfio_device *dev,
 				    enum vfio_notify_type type,
 				    struct notifier_block *nb);
 
-- 
2.35.1


* [PATCH 2/9] vfio/ccw: Remove mdev from struct channel_program
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

The next patch wants the vfio_device instead. There is no reason to store
an mdev pointer here since we can get back to the vfio_device with
container_of().

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/s390/cio/vfio_ccw_cp.c  | 44 +++++++++++++++++++--------------
 drivers/s390/cio/vfio_ccw_cp.h  |  4 +--
 drivers/s390/cio/vfio_ccw_fsm.c |  3 +--
 3 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index 8d1b2771c1aa02..af5048a1ba8894 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -16,6 +16,7 @@
 #include <asm/idals.h>
 
 #include "vfio_ccw_cp.h"
+#include "vfio_ccw_private.h"
 
 struct pfn_array {
 	/* Starting guest physical I/O address. */
@@ -98,17 +99,17 @@ static int pfn_array_alloc(struct pfn_array *pa, u64 iova, unsigned int len)
  * If the pin request partially succeeds, or fails completely,
  * all pages are left unpinned and a negative error value is returned.
  */
-static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
+static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -122,11 +123,11 @@ static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
 }
 
 /* Unpin the pages before releasing the memory. */
-static void pfn_array_unpin_free(struct pfn_array *pa, struct device *mdev)
+static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
@@ -190,7 +191,7 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
  * Within the domain (@mdev), copy @n bytes from a guest physical
  * address (@iova) to a host physical address (@to).
  */
-static long copy_from_iova(struct device *mdev,
+static long copy_from_iova(struct vfio_device *vdev,
 			   void *to, u64 iova,
 			   unsigned long n)
 {
@@ -203,9 +204,9 @@ static long copy_from_iova(struct device *mdev,
 	if (ret < 0)
 		return ret;
 
-	ret = pfn_array_pin(&pa, mdev);
+	ret = pfn_array_pin(&pa, vdev);
 	if (ret < 0) {
-		pfn_array_unpin_free(&pa, mdev);
+		pfn_array_unpin_free(&pa, vdev);
 		return ret;
 	}
 
@@ -226,7 +227,7 @@ static long copy_from_iova(struct device *mdev,
 			break;
 	}
 
-	pfn_array_unpin_free(&pa, mdev);
+	pfn_array_unpin_free(&pa, vdev);
 
 	return l;
 }
@@ -423,11 +424,13 @@ static int ccwchain_loop_tic(struct ccwchain *chain,
 
 static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain;
 	int len, ret;
 
 	/* Copy 2K (the most we support today) of possible CCWs */
-	len = copy_from_iova(cp->mdev, cp->guest_cp, cda,
+	len = copy_from_iova(vdev, cp->guest_cp, cda,
 			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
 	if (len)
 		return len;
@@ -508,6 +511,8 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 				 int idx,
 				 struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccw1 *ccw;
 	struct pfn_array *pa;
 	u64 iova;
@@ -526,7 +531,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	if (ccw_is_idal(ccw)) {
 		/* Read first IDAW to see if it's 4K-aligned or not. */
 		/* All subsequent IDAws will be 4K-aligned. */
-		ret = copy_from_iova(cp->mdev, &iova, ccw->cda, sizeof(iova));
+		ret = copy_from_iova(vdev, &iova, ccw->cda, sizeof(iova));
 		if (ret)
 			return ret;
 	} else {
@@ -555,7 +560,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 
 	if (ccw_is_idal(ccw)) {
 		/* Copy guest IDAL into host IDAL */
-		ret = copy_from_iova(cp->mdev, idaws, ccw->cda, idal_len);
+		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
 		if (ret)
 			goto out_unpin;
 
@@ -574,7 +579,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	}
 
 	if (ccw_does_data_transfer(ccw)) {
-		ret = pfn_array_pin(pa, cp->mdev);
+		ret = pfn_array_pin(pa, vdev);
 		if (ret < 0)
 			goto out_unpin;
 	} else {
@@ -590,7 +595,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	return 0;
 
 out_unpin:
-	pfn_array_unpin_free(pa, cp->mdev);
+	pfn_array_unpin_free(pa, vdev);
 out_free_idaws:
 	kfree(idaws);
 out_init:
@@ -632,8 +637,10 @@ static int ccwchain_fetch_one(struct ccwchain *chain,
  * Returns:
  *   %0 on success and a negative error value on failure.
  */
-int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
+int cp_init(struct channel_program *cp, union orb *orb)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	/* custom ratelimit used to avoid flood during guest IPL */
 	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
 	int ret;
@@ -650,11 +657,10 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
 	 * the problem if something does break.
 	 */
 	if (!orb->cmd.pfch && __ratelimit(&ratelimit_state))
-		dev_warn(mdev, "Prefetching channel program even though prefetch not specified in ORB");
+		dev_warn(vdev->dev, "Prefetching channel program even though prefetch not specified in ORB");
 
 	INIT_LIST_HEAD(&cp->ccwchain_list);
 	memcpy(&cp->orb, orb, sizeof(*orb));
-	cp->mdev = mdev;
 
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
@@ -682,6 +688,8 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
  */
 void cp_free(struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain, *temp;
 	int i;
 
@@ -691,7 +699,7 @@ void cp_free(struct channel_program *cp)
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			pfn_array_unpin_free(chain->ch_pa + i, cp->mdev);
+			pfn_array_unpin_free(chain->ch_pa + i, vdev);
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
diff --git a/drivers/s390/cio/vfio_ccw_cp.h b/drivers/s390/cio/vfio_ccw_cp.h
index ba31240ce96594..e4c436199b4cda 100644
--- a/drivers/s390/cio/vfio_ccw_cp.h
+++ b/drivers/s390/cio/vfio_ccw_cp.h
@@ -37,13 +37,11 @@
 struct channel_program {
 	struct list_head ccwchain_list;
 	union orb orb;
-	struct device *mdev;
 	bool initialized;
 	struct ccw1 *guest_cp;
 };
 
-extern int cp_init(struct channel_program *cp, struct device *mdev,
-		   union orb *orb);
+extern int cp_init(struct channel_program *cp, union orb *orb);
 extern void cp_free(struct channel_program *cp);
 extern int cp_prefetch(struct channel_program *cp);
 extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
index e435a9cd92dacf..8483a266051c21 100644
--- a/drivers/s390/cio/vfio_ccw_fsm.c
+++ b/drivers/s390/cio/vfio_ccw_fsm.c
@@ -262,8 +262,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 			errstr = "transport mode";
 			goto err_out;
 		}
-		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
-					      orb);
+		io_region->ret_code = cp_init(&private->cp, orb);
 		if (io_region->ret_code) {
 			VFIO_CCW_MSG_EVENT(2,
 					   "%pUl (%x.%x.%04x): cp_init=%d\n",
-- 
2.35.1


-	cp->mdev = mdev;
 
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
@@ -682,6 +688,8 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
  */
 void cp_free(struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain, *temp;
 	int i;
 
@@ -691,7 +699,7 @@ void cp_free(struct channel_program *cp)
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			pfn_array_unpin_free(chain->ch_pa + i, cp->mdev);
+			pfn_array_unpin_free(chain->ch_pa + i, vdev);
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
diff --git a/drivers/s390/cio/vfio_ccw_cp.h b/drivers/s390/cio/vfio_ccw_cp.h
index ba31240ce96594..e4c436199b4cda 100644
--- a/drivers/s390/cio/vfio_ccw_cp.h
+++ b/drivers/s390/cio/vfio_ccw_cp.h
@@ -37,13 +37,11 @@
 struct channel_program {
 	struct list_head ccwchain_list;
 	union orb orb;
-	struct device *mdev;
 	bool initialized;
 	struct ccw1 *guest_cp;
 };
 
-extern int cp_init(struct channel_program *cp, struct device *mdev,
-		   union orb *orb);
+extern int cp_init(struct channel_program *cp, union orb *orb);
 extern void cp_free(struct channel_program *cp);
 extern int cp_prefetch(struct channel_program *cp);
 extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
index e435a9cd92dacf..8483a266051c21 100644
--- a/drivers/s390/cio/vfio_ccw_fsm.c
+++ b/drivers/s390/cio/vfio_ccw_fsm.c
@@ -262,8 +262,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 			errstr = "transport mode";
 			goto err_out;
 		}
-		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
-					      orb);
+		io_region->ret_code = cp_init(&private->cp, orb);
 		if (io_region->ret_code) {
 			VFIO_CCW_MSG_EVENT(2,
 					   "%pUl (%x.%x.%04x): cp_init=%d\n",
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Every caller has a readily available vfio_device pointer, so use that
instead of passing in a generic struct device. The struct vfio_device
already contains the group we need, which avoids complexity, extra
refcounting, and a confusing lifecycle model.
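
As a usage sketch of the new calling convention ("mdev_state" and "iova"
are placeholders for a driver's private structure, which embeds its
struct vfio_device as ->vdev, and for a guest DMA address):

        unsigned long user_pfn = iova >> PAGE_SHIFT;
        unsigned long phys_pfn;
        int ret;

        /* Pin one page of the user mapping and get its host pfn. */
        ret = vfio_pin_pages(&mdev_state->vdev, &user_pfn, 1,
                             IOMMU_READ | IOMMU_WRITE, &phys_pfn);
        if (ret != 1)   /* returns pages pinned on success, -errno on error */
                return ret < 0 ? ret : -EINVAL;

        /* ... access the page through phys_pfn ... */

        vfio_unpin_pages(&mdev_state->vdev, &user_pfn, 1);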

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 .../driver-api/vfio-mediated-device.rst       |  4 +-
 drivers/s390/cio/vfio_ccw_cp.c                |  6 +--
 drivers/s390/crypto/vfio_ap_ops.c             |  8 ++--
 drivers/vfio/vfio.c                           | 40 ++++++-------------
 include/linux/vfio.h                          |  4 +-
 5 files changed, 24 insertions(+), 38 deletions(-)

diff --git a/Documentation/driver-api/vfio-mediated-device.rst b/Documentation/driver-api/vfio-mediated-device.rst
index 9f26079cacae35..6aeca741dc9be1 100644
--- a/Documentation/driver-api/vfio-mediated-device.rst
+++ b/Documentation/driver-api/vfio-mediated-device.rst
@@ -279,10 +279,10 @@ Translation APIs for Mediated Devices
 The following APIs are provided for translating user pfn to host pfn in a VFIO
 driver::
 
-	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
+	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 				  int npage, int prot, unsigned long *phys_pfn);
 
-	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
+	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 				    int npage);
 
 These functions call back into the back-end IOMMU module by using the pin_pages
diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index af5048a1ba8894..e362cb962a7234 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -103,13 +103,13 @@ static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -127,7 +127,7 @@ static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 69768061cd7bd9..a10b3369d76c41 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -124,7 +124,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
 		q->saved_isc = VFIO_AP_ISC_INVALID;
 	}
 	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
-		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
+		vfio_unpin_pages(&q->matrix_mdev->vdev,
 				 &q->saved_pfn, 1);
 		q->saved_pfn = 0;
 	}
@@ -258,7 +258,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		return status;
 	}
 
-	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
+	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
 			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
 	switch (ret) {
 	case 1:
@@ -301,7 +301,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		break;
 	case AP_RESPONSE_OTHERWISE_CHANGED:
 		/* We could not modify IRQ setings: clear new configuration */
-		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1);
+		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
 		kvm_s390_gisc_unregister(kvm, isc);
 		break;
 	default:
@@ -1250,7 +1250,7 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
 		struct vfio_iommu_type1_dma_unmap *unmap = data;
 		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
 
-		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn, 1);
+		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
 		return NOTIFY_OK;
 	}
 
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 8a5c46aa2bef61..24b92a45cfc8f1 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2142,32 +2142,26 @@ EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
  * @phys_pfn[out]: array of host PFNs
  * Return error or number of pages pinned.
  */
-int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
+int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 		   int prot, unsigned long *phys_pfn)
 {
 	struct vfio_container *container;
-	struct vfio_group *group;
+	struct vfio_group *group = vdev->group;
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!dev || !user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage)
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
-	if (group->dev_counter > 1) {
-		ret = -EINVAL;
-		goto err_pin_pages;
-	}
+	if (group->dev_counter > 1)
+		return -EINVAL;
 
 	ret = vfio_group_add_container_user(group);
 	if (ret)
-		goto err_pin_pages;
+		return ret;
 
 	container = group->container;
 	driver = container->iommu_driver;
@@ -2180,8 +2174,6 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 
 	vfio_group_try_dissolve_container(group);
 
-err_pin_pages:
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_pin_pages);
@@ -2195,28 +2187,24 @@ EXPORT_SYMBOL(vfio_pin_pages);
  *                 be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
  * Return error or number of pages unpinned.
  */
-int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
+int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
+		     int npage)
 {
 	struct vfio_container *container;
-	struct vfio_group *group;
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!dev || !user_pfn || !npage)
+	if (!user_pfn || !npage)
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
-	ret = vfio_group_add_container_user(group);
+	ret = vfio_group_add_container_user(vdev->group);
 	if (ret)
-		goto err_unpin_pages;
+		return ret;
 
-	container = group->container;
+	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
 		ret = driver->ops->unpin_pages(container->iommu_data, user_pfn,
@@ -2224,10 +2212,8 @@ int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(group);
+	vfio_group_try_dissolve_container(vdev->group);
 
-err_unpin_pages:
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 748ec0e0293aea..8f2a09801a660b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -150,9 +150,9 @@ extern long vfio_external_check_extension(struct vfio_group *group,
 
 #define VFIO_PIN_PAGES_MAX_ENTRIES	(PAGE_SIZE/sizeof(unsigned long))
 
-extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
+extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			  int npage, int prot, unsigned long *phys_pfn);
-extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
+extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			    int npage);
 
 extern int vfio_group_pin_pages(struct vfio_group *group,
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
reason to use a group interface here; kvmgt has easy access to a
vfio_device.
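
A condensed sketch of the resulting flow: look the vfio_device up once
through the temporary mdev_legacy_get_vfio_device() helper, then use the
device-based calls per gfn (error handling trimmed for brevity):

        struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
        struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
        unsigned long cur_gfn = gfn;
        unsigned long pfn;
        int ret;

        ret = vfio_pin_pages(vfio_dev, &cur_gfn, 1,
                             IOMMU_READ | IOMMU_WRITE, &pfn);
        if (ret != 1)
                return ret < 0 ? ret : -EINVAL;
        /* ... use pfn ... */
        vfio_unpin_pages(vfio_dev, &cur_gfn, 1);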

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index bb59d21cf898ab..df7d87409e3a9c 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -268,6 +268,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 {
 	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
 	int total_pages;
 	int npage;
 	int ret;
@@ -277,7 +278,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 	for (npage = 0; npage < total_pages; npage++) {
 		unsigned long cur_gfn = gfn + npage;
 
-		ret = vfio_group_unpin_pages(vdev->vfio_group, &cur_gfn, 1);
+		ret = vfio_unpin_pages(vfio_dev, &cur_gfn, 1);
 		drm_WARN_ON(&i915->drm, ret != 1);
 	}
 }
@@ -287,6 +288,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long size, struct page **page)
 {
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
 	unsigned long base_pfn = 0;
 	int total_pages;
 	int npage;
@@ -301,8 +303,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long cur_gfn = gfn + npage;
 		unsigned long pfn;
 
-		ret = vfio_group_pin_pages(vdev->vfio_group, &cur_gfn, 1,
-					   IOMMU_READ | IOMMU_WRITE, &pfn);
+		ret = vfio_pin_pages(vfio_dev, &cur_gfn, 1,
+				     IOMMU_READ | IOMMU_WRITE, &pfn);
 		if (ret != 1) {
 			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
 				     cur_gfn, ret);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Every caller has a readily available vfio_device pointer, so use that
instead of passing in a generic struct device. The struct vfio_device
already contains the group we need, which avoids complexity, extra
refcounting, and a confusing lifecycle model.
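
As a usage sketch under the new signature ("vdev", "gpa" and "len" stand
in for the caller's vfio_device, a guest physical address and the
transfer size):

        void *buf = kzalloc(len, GFP_KERNEL);
        int ret;

        if (!buf)
                return -ENOMEM;

        /* write == false: copy guest memory at 'gpa' into the kernel buffer */
        ret = vfio_dma_rw(vdev, gpa, buf, len, false);
        if (!ret) {
                /* ... consume buf ... */
        }
        kfree(buf);
        return ret;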

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c |  5 +++--
 drivers/vfio/vfio.c              | 22 ++++++++++------------
 include/linux/vfio.h             |  2 +-
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index df7d87409e3a9c..3302d5d4d92146 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -2184,8 +2184,9 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
 
 	info = (struct kvmgt_guest_info *)handle;
 
-	return vfio_dma_rw(kvmgt_vdev(info->vgpu)->vfio_group,
-			   gpa, buf, len, write);
+	return vfio_dma_rw(
+		mdev_legacy_get_vfio_device(kvmgt_vdev(info->vgpu)->mdev),
+		gpa, buf, len, write);
 }
 
 static int kvmgt_read_gpa(unsigned long handle, unsigned long gpa,
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 24b92a45cfc8f1..e6e102e017623b 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2323,32 +2323,28 @@ EXPORT_SYMBOL(vfio_group_unpin_pages);
  * As the read/write of user space memory is conducted via the CPUs and is
  * not a real device DMA, it is not necessary to pin the user space memory.
  *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: VFIO group
+ * @vdev [in]		: VFIO device
  * @user_iova [in]	: base IOVA of a user space buffer
  * @data [in]		: pointer to kernel buffer
  * @len [in]		: kernel buffer length
  * @write		: indicate read or write
  * Return error code on failure or 0 on success.
  */
-int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
+int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 		void *data, size_t len, bool write)
 {
 	struct vfio_container *container;
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!group || !data || len <= 0)
+	if (!data || len <= 0)
 		return -EINVAL;
 
-	container = group->container;
+	ret = vfio_group_add_container_user(vdev->group);
+	if (ret)
+		return ret;
+
+	container = vdev->group->container;
 	driver = container->iommu_driver;
 
 	if (likely(driver && driver->ops->dma_rw))
@@ -2357,6 +2353,8 @@ int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
 	else
 		ret = -ENOTTY;
 
+	vfio_group_try_dissolve_container(vdev->group);
+
 	return ret;
 }
 EXPORT_SYMBOL(vfio_dma_rw);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 8f2a09801a660b..91d46e532ca104 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -161,7 +161,7 @@ extern int vfio_group_pin_pages(struct vfio_group *group,
 extern int vfio_group_unpin_pages(struct vfio_group *group,
 				  unsigned long *user_iova_pfn, int npage);
 
-extern int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
+extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 		       void *data, size_t len, bool write);
 
 extern struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 6/9] drm/i915/gvt: Add missing module_put() in error unwind
  2022-04-12 15:53 ` Jason Gunthorpe
  (?)
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

try_module_get() must be undone if kvmgt_guest_init() fails, otherwise the
module reference count is leaked on the failure path, since the
close_device op is never called in this case.
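
Schematically, the balance being restored looks like the sketch below; the
function names are placeholders, not the actual kvmgt code:

  #include <linux/module.h>

  static int example_guest_init(void)
  {
          return -EINVAL;                 /* pretend the init step failed */
  }

  static int example_open_device(void)
  {
          int ret;

          if (!try_module_get(THIS_MODULE))
                  return -ENODEV;

          ret = example_guest_init();
          if (ret)
                  goto undo_module_get;   /* any failure after the get must put */

          return 0;

  undo_module_get:
          module_put(THIS_MODULE);        /* close never runs, drop the ref here */
          return ret;
  }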

Fixes: 9bdb073464d6 ("drm/i915/gvt: Change KVMGT as self load module")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 3302d5d4d92146..d7c22a2601f3ad 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -952,13 +952,16 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 
 	ret = kvmgt_guest_init(mdev);
 	if (ret)
-		goto undo_group;
+		goto undo_module_get;
 
 	intel_gvt_ops->vgpu_activate(vgpu);
 
 	atomic_set(&vdev->released, 0);
 	return ret;
 
+undo_module_get:
+	module_put(THIS_MODULE);
+
 undo_group:
 	vfio_group_put_external_user(vdev->vfio_group);
 	vdev->vfio_group = NULL;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 7/9] drm/i915/gvt: Delete kvmgt_vdev::vfio_group
  2022-04-12 15:53 ` Jason Gunthorpe
  (?)
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Nothing references this struct member any more; delete it completely.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index d7c22a2601f3ad..b15dbe9ecd7e15 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -133,7 +133,6 @@ struct kvmgt_vdev {
 	struct work_struct release_work;
 	atomic_t released;
 	struct vfio_device *vfio_device;
-	struct vfio_group *vfio_group;
 };
 
 static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
@@ -911,7 +910,6 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long events;
 	int ret;
-	struct vfio_group *vfio_group;
 
 	vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
 	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
@@ -934,20 +932,12 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 		goto undo_iommu;
 	}
 
-	vfio_group = vfio_group_get_external_user_from_dev(mdev_dev(mdev));
-	if (IS_ERR_OR_NULL(vfio_group)) {
-		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
-		gvt_vgpu_err("vfio_group_get_external_user_from_dev failed\n");
-		goto undo_register;
-	}
-	vdev->vfio_group = vfio_group;
-
 	/* Take a module reference as mdev core doesn't take
 	 * a reference for vendor driver.
 	 */
 	if (!try_module_get(THIS_MODULE)) {
 		ret = -ENODEV;
-		goto undo_group;
+		goto undo_register;
 	}
 
 	ret = kvmgt_guest_init(mdev);
@@ -962,10 +952,6 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
 undo_module_get:
 	module_put(THIS_MODULE);
 
-undo_group:
-	vfio_group_put_external_user(vdev->vfio_group);
-	vdev->vfio_group = NULL;
-
 undo_register:
 	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 					&vdev->group_notifier);
@@ -1023,7 +1009,6 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	kvmgt_guest_exit(info);
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
-	vfio_group_put_external_user(vdev->vfio_group);
 
 	vdev->kvm = NULL;
 	vgpu->handle = 0;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 8/9] vfio: Remove dead code
  2022-04-12 15:53 ` Jason Gunthorpe
  (?)
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

Now that callers have been updated to use the vfio_device APIs, the
driver-facing group interface is no longer used; delete it:

- vfio_group_get_external_user_from_dev()
- vfio_group_pin_pages()
- vfio_group_unpin_pages()
- vfio_group_iommu_domain()
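
For reference, a hedged sketch of the vfio_device calls that replace them;
the helper is illustrative and the IOMMU_READ|IOMMU_WRITE protection flags
are an assumption, not something this patch mandates:

  #include <linux/iommu.h>
  #include <linux/vfio.h>

  /* Illustration only: pin and immediately unpin one guest PFN. */
  static int example_pin_one(struct vfio_device *vdev, unsigned long gfn,
                             unsigned long *hpfn)
  {
          /* was vfio_group_pin_pages(group, &gfn, 1, prot, hpfn) */
          int ret = vfio_pin_pages(vdev, &gfn, 1, IOMMU_READ | IOMMU_WRITE, hpfn);

          if (ret != 1)
                  return ret < 0 ? ret : -EFAULT;

          /* was vfio_group_unpin_pages(group, &gfn, 1) */
          vfio_unpin_pages(vdev, &gfn, 1);
          return 0;
  }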

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/vfio/vfio.c  | 151 -------------------------------------------
 include/linux/vfio.h |  11 ----
 2 files changed, 162 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index e6e102e017623b..3d75505bf3cc26 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1947,44 +1947,6 @@ struct vfio_group *vfio_group_get_external_user(struct file *filep)
 }
 EXPORT_SYMBOL_GPL(vfio_group_get_external_user);
 
-/*
- * External user API, exported by symbols to be linked dynamically.
- * The external user passes in a device pointer
- * to verify that:
- *	- A VFIO group is assiciated with the device;
- *	- IOMMU is set for the group.
- * If both checks passed, vfio_group_get_external_user_from_dev()
- * increments the container user counter to prevent the VFIO group
- * from disposal before external user exits and returns the pointer
- * to the VFIO group.
- *
- * When the external user finishes using the VFIO group, it calls
- * vfio_group_put_external_user() to release the VFIO group and
- * decrement the container user counter.
- *
- * @dev [in]	: device
- * Return error PTR or pointer to VFIO group.
- */
-
-struct vfio_group *vfio_group_get_external_user_from_dev(struct device *dev)
-{
-	struct vfio_group *group;
-	int ret;
-
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return ERR_PTR(-ENODEV);
-
-	ret = vfio_group_add_container_user(group);
-	if (ret) {
-		vfio_group_put(group);
-		return ERR_PTR(ret);
-	}
-
-	return group;
-}
-EXPORT_SYMBOL_GPL(vfio_group_get_external_user_from_dev);
-
 void vfio_group_put_external_user(struct vfio_group *group)
 {
 	vfio_group_try_dissolve_container(group);
@@ -2218,101 +2180,6 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
 
-/*
- * Pin a set of guest IOVA PFNs and return their associated host PFNs for a
- * VFIO group.
- *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: VFIO group
- * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be pinned.
- * @npage [in]		: count of elements in user_iova_pfn array.
- *			  This count should not be greater
- *			  VFIO_PIN_PAGES_MAX_ENTRIES.
- * @prot [in]		: protection flags
- * @phys_pfn [out]	: array of host PFNs
- * Return error or number of pages pinned.
- */
-int vfio_group_pin_pages(struct vfio_group *group,
-			 unsigned long *user_iova_pfn, int npage,
-			 int prot, unsigned long *phys_pfn)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	if (!group || !user_iova_pfn || !phys_pfn || !npage)
-		return -EINVAL;
-
-	if (group->dev_counter > 1)
-		return -EINVAL;
-
-	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
-		return -E2BIG;
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data,
-					     group->iommu_group, user_iova_pfn,
-					     npage, prot, phys_pfn);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-EXPORT_SYMBOL(vfio_group_pin_pages);
-
-/*
- * Unpin a set of guest IOVA PFNs for a VFIO group.
- *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: vfio group
- * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be unpinned.
- * @npage [in]		: count of elements in user_iova_pfn array.
- *			  This count should not be greater than
- *			  VFIO_PIN_PAGES_MAX_ENTRIES.
- * Return error or number of pages unpinned.
- */
-int vfio_group_unpin_pages(struct vfio_group *group,
-			   unsigned long *user_iova_pfn, int npage)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	if (!group || !user_iova_pfn || !npage)
-		return -EINVAL;
-
-	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
-		return -E2BIG;
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->unpin_pages))
-		ret = driver->ops->unpin_pages(container->iommu_data,
-					       user_iova_pfn, npage);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-EXPORT_SYMBOL(vfio_group_unpin_pages);
-
-
 /*
  * This interface allows the CPUs to perform some sort of virtual DMA on
  * behalf of the device.
@@ -2515,24 +2382,6 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 }
 EXPORT_SYMBOL(vfio_unregister_notifier);
 
-struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-
-	if (!group)
-		return ERR_PTR(-EINVAL);
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->group_iommu_domain))
-		return driver->ops->group_iommu_domain(container->iommu_data,
-						       group->iommu_group);
-
-	return ERR_PTR(-ENOTTY);
-}
-EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);
-
 /*
  * Module/class support
  */
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 91d46e532ca104..9a9981c2622896 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -140,8 +140,6 @@ int vfio_mig_get_next_state(struct vfio_device *device,
  */
 extern struct vfio_group *vfio_group_get_external_user(struct file *filep);
 extern void vfio_group_put_external_user(struct vfio_group *group);
-extern struct vfio_group *vfio_group_get_external_user_from_dev(struct device
-								*dev);
 extern bool vfio_external_group_match_file(struct vfio_group *group,
 					   struct file *filep);
 extern int vfio_external_user_iommu_id(struct vfio_group *group);
@@ -154,18 +152,9 @@ extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			  int npage, int prot, unsigned long *phys_pfn);
 extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			    int npage);
-
-extern int vfio_group_pin_pages(struct vfio_group *group,
-				unsigned long *user_iova_pfn, int npage,
-				int prot, unsigned long *phys_pfn);
-extern int vfio_group_unpin_pages(struct vfio_group *group,
-				  unsigned long *user_iova_pfn, int npage);
-
 extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 		       void *data, size_t len, bool write);
 
-extern struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group);
-
 /* each type has independent events */
 enum vfio_notify_type {
 	VFIO_IOMMU_NOTIFY = 0,
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-12 15:53 ` Jason Gunthorpe
  (?)
@ 2022-04-12 15:53   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

When the open_device() op is called, container_users is incremented and
stays incremented until close_device(). Thus, so long as drivers call
these functions within their open_device()/close_device() region, they do
not need to worry about container_users.

These functions can all only be called between
open_device()/close_device():

  vfio_pin_pages()
  vfio_unpin_pages()
  vfio_dma_rw()
  vfio_register_notifier()
  vfio_unregister_notifier()

So eliminate the calls to vfio_group_add_container_user() and add a simple
WARN_ON to detect mis-use by drivers.
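
A hedged sketch of the resulting contract from a driver's point of view;
the callback names and trivial bodies are made up:

  #include <linux/vfio.h>

  static int example_open_device(struct vfio_device *vdev)
  {
          char buf[8] = {};

          /*
           * Legal: by the time the open_device() op runs the core has
           * already elevated the counts, so no extra accounting is needed
           * around vfio_dma_rw()/vfio_pin_pages()/notifier calls.
           */
          return vfio_dma_rw(vdev, 0, buf, sizeof(buf), false);
  }

  static void example_close_device(struct vfio_device *vdev)
  {
          /*
           * After close_device() returns, any further call on vdev would
           * hit the new WARN_ON(!READ_ONCE(vdev->open_count)) and fail
           * with -EINVAL.
           */
  }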

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/vfio/vfio.c | 67 +++++++++------------------------------------
 1 file changed, 13 insertions(+), 54 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 3d75505bf3cc26..ab0c3f5635905c 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2121,9 +2121,8 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = group->container;
 	driver = container->iommu_driver;
@@ -2134,8 +2133,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_pin_pages);
@@ -2162,9 +2159,8 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = vdev->group->container;
 	driver = container->iommu_driver;
@@ -2174,8 +2170,6 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
@@ -2207,9 +2201,8 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 	if (!data || len <= 0)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = vdev->group->container;
 	driver = container->iommu_driver;
@@ -2219,9 +2212,6 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 					  user_iova, data, len, write);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_dma_rw);
@@ -2234,10 +2224,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->register_notifier))
@@ -2245,9 +2231,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 						     events, nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2258,10 +2241,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unregister_notifier))
@@ -2269,9 +2248,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 						       nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2300,10 +2276,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (*events)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	ret = blocking_notifier_chain_register(&group->notifier, nb);
 
 	/*
@@ -2313,25 +2285,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (!ret && set_kvm && group->kvm)
 		blocking_notifier_call_chain(&group->notifier,
 					VFIO_GROUP_NOTIFY_SET_KVM, group->kvm);
-
-	vfio_group_try_dissolve_container(group);
-
-	return ret;
-}
-
-static int vfio_unregister_group_notifier(struct vfio_group *group,
-					 struct notifier_block *nb)
-{
-	int ret;
-
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
-	ret = blocking_notifier_chain_unregister(&group->notifier, nb);
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2344,6 +2297,9 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	if (!nb || !events || (*events == 0))
 		return -EINVAL;
 
+	if (WARN_ON(!READ_ONCE(dev->open_count)))
+		return -EINVAL;
+
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_register_iommu_notifier(group, events, nb);
@@ -2368,12 +2324,15 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	if (!nb)
 		return -EINVAL;
 
+	if (WARN_ON(!READ_ONCE(dev->open_count)))
+		return -EINVAL;
+
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_unregister_iommu_notifier(group, nb);
 		break;
 	case VFIO_GROUP_NOTIFY:
-		ret = vfio_unregister_group_notifier(group, nb);
+		ret = blocking_notifier_chain_unregister(&group->notifier, nb);
 		break;
 	default:
 		ret = -EINVAL;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-12 15:53   ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-12 15:53 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Liu, Yi L, Christoph Hellwig

When the open_device() op is called, container_users is incremented and
stays incremented until close_device(). Thus, so long as drivers call
these functions within their open_device()/close_device() region they do
not need to worry about container_users.

These functions can all only be called between
open_device()/close_device():

  vfio_pin_pages()
  vfio_unpin_pages()
  vfio_dma_rw()
  vfio_register_notifier()
  vfio_unregister_notifier()

So eliminate the calls to vfio_group_add_container_user() and add a simple
WARN_ON to detect misuse by drivers.
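
As a rough sketch of the calling pattern this relies on (hypothetical
driver; the struct, field and function names below are invented for
illustration and are not part of this series), a pin taken in
open_device() and dropped in close_device() needs no reference counting
of its own:

#include <linux/iommu.h>
#include <linux/vfio.h>

struct sample_mdev {
        struct vfio_device vdev;
        unsigned long ring_gfn;         /* guest pfn to pin */
        unsigned long ring_hfn;         /* host pfn returned by the pin */
};

static int sample_open_device(struct vfio_device *vdev)
{
        struct sample_mdev *sm = container_of(vdev, struct sample_mdev, vdev);
        int ret;

        /* open_count is already non-zero here, so the WARN_ON stays quiet */
        ret = vfio_pin_pages(vdev, &sm->ring_gfn, 1,
                             IOMMU_READ | IOMMU_WRITE, &sm->ring_hfn);
        return ret < 0 ? ret : 0;
}

static void sample_close_device(struct vfio_device *vdev)
{
        struct sample_mdev *sm = container_of(vdev, struct sample_mdev, vdev);

        /* drop the pin before the open_device()/close_device() window ends */
        vfio_unpin_pages(vdev, &sm->ring_gfn, 1);
}

static const struct vfio_device_ops sample_dev_ops = {
        .open_device = sample_open_device,
        .close_device = sample_close_device,
};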

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/vfio/vfio.c | 67 +++++++++------------------------------------
 1 file changed, 13 insertions(+), 54 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 3d75505bf3cc26..ab0c3f5635905c 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2121,9 +2121,8 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = group->container;
 	driver = container->iommu_driver;
@@ -2134,8 +2133,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_pin_pages);
@@ -2162,9 +2159,8 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = vdev->group->container;
 	driver = container->iommu_driver;
@@ -2174,8 +2170,6 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
@@ -2207,9 +2201,8 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 	if (!data || len <= 0)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
+	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+		return -EINVAL;
 
 	container = vdev->group->container;
 	driver = container->iommu_driver;
@@ -2219,9 +2212,6 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 					  user_iova, data, len, write);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_dma_rw);
@@ -2234,10 +2224,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->register_notifier))
@@ -2245,9 +2231,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 						     events, nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2258,10 +2241,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unregister_notifier))
@@ -2269,9 +2248,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 						       nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2300,10 +2276,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (*events)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	ret = blocking_notifier_chain_register(&group->notifier, nb);
 
 	/*
@@ -2313,25 +2285,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (!ret && set_kvm && group->kvm)
 		blocking_notifier_call_chain(&group->notifier,
 					VFIO_GROUP_NOTIFY_SET_KVM, group->kvm);
-
-	vfio_group_try_dissolve_container(group);
-
-	return ret;
-}
-
-static int vfio_unregister_group_notifier(struct vfio_group *group,
-					 struct notifier_block *nb)
-{
-	int ret;
-
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
-	ret = blocking_notifier_chain_unregister(&group->notifier, nb);
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2344,6 +2297,9 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	if (!nb || !events || (*events == 0))
 		return -EINVAL;
 
+	if (WARN_ON(!READ_ONCE(dev->open_count)))
+		return -EINVAL;
+
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_register_iommu_notifier(group, events, nb);
@@ -2368,12 +2324,15 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	if (!nb)
 		return -EINVAL;
 
+	if (WARN_ON(!READ_ONCE(dev->open_count)))
+		return -EINVAL;
+
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_unregister_iommu_notifier(group, nb);
 		break;
 	case VFIO_GROUP_NOTIFY:
-		ret = vfio_unregister_group_notifier(group, nb);
+		ret = blocking_notifier_chain_unregister(&group->notifier, nb);
 		break;
 	default:
 		ret = -EINVAL;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 141+ messages in thread

* Re: [PATCH 0/9] Make the rest of the VFIO driver interface use vfio_device
  2022-04-12 15:53 ` Jason Gunthorpe
@ 2022-04-13  5:52   ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

On Tue, Apr 12, 2022 at 12:53:27PM -0300, Jason Gunthorpe wrote:
> There is a conflict with Christoph's gvt rework here:
> 
>  https://lore.kernel.org/all/20220411141403.86980-1-hch@lst.de/
> 
> I've organized this so it is independent of Christoph's series, by adding
> the temporary mdev_legacy_get_vfio_device(), however it is easy for me to
> rebase. We can decide what to do as we see what becomes mergable. My
> preference would be to see Christoph's series merged into the drm&vfio
> trees and we do both series this cycle.

The hacks for the unconverted gvt are a real mess, so I hope we can
finish the gvt conversion first.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 0/9] Make the rest of the VFIO driver interface use vfio_device
@ 2022-04-13  5:52   ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Tue, Apr 12, 2022 at 12:53:27PM -0300, Jason Gunthorpe wrote:
> There is a conflict with Christoph's gvt rework here:
> 
>  https://lore.kernel.org/all/20220411141403.86980-1-hch@lst.de/
> 
> I've organized this so it is independent of Christoph's series, by adding
> the temporary mdev_legacy_get_vfio_device(), however it is easy for me to
> rebase. We can decide what to do as we see what becomes mergable. My
> preference would be to see Christoph's series merged into the drm&vfio
> trees and we do both series this cycle.

The hacks for the unconverted gvt are a real mess, so I hope we can
finish the gvt conversion first.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  5:55     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:55 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

On Tue, Apr 12, 2022 at 12:53:28PM -0300, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().

Instead of bothering the drivers with the notifiers at all, the two
notifier_blocks should move into struct vfio_device, and the
vfio_ops should just grow two new dma_unmap and set_kvm methods.

This will isolate the drivers from the whole notifiers mess and its
boilerplate code.
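
As a sketch of that idea (purely hypothetical: neither callback exists
in struct vfio_device_ops at this point, and the struct name below is
made up), the two methods could look roughly like:

#include <linux/vfio.h>

struct kvm;

/*
 * The vfio core would keep the notifier_blocks internally and translate
 * notifier events into these per-device callbacks, so drivers never see
 * the notifier machinery or its boilerplate.
 */
struct vfio_device_proposed_ops {
        /* an IOVA range the device may have been using was unmapped */
        void (*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
        /* the KVM bound to the device's group changed (NULL on unbind) */
        void (*set_kvm)(struct vfio_device *vdev, struct kvm *kvm);
};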

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13  5:55     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:55 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Tue, Apr 12, 2022 at 12:53:28PM -0300, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().

Instead of bothering the drivers with the notifiers at all, the two
notifier_blocks should move into struct vfio_device, and the
vfio_ops should just grow two new dma_unmap and set_kvm methods.

This will isolate the drivers from the whole notifiers mess and its
boilerplate code.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  5:57     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:57 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

> -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>  				  int npage, int prot, unsigned long *phys_pfn);
>  
> -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,

Please drop the externs when you touch this (also for the actual
header).

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-13  5:57     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:57 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

> -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>  				  int npage, int prot, unsigned long *phys_pfn);
>  
> -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,

Please drop the externs when you touch this (also for the actual
header).

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 6/9] drm/i915/gvt: Add missing module_put() in error unwind
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  5:59     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:59 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

On Tue, Apr 12, 2022 at 12:53:33PM -0300, Jason Gunthorpe wrote:
> try_module_get() must be undone if kvmgt_guest_init() fails or we leak the
> module reference count on the failure path since the close_device op is
> never called in this case.
> 
> Fixes: 9bdb073464d6 ("drm/i915/gvt: Change KVMGT as self load module")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

This is all gone with the i915 refactor.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 6/9] drm/i915/gvt: Add missing module_put() in error unwind
@ 2022-04-13  5:59     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  5:59 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Tue, Apr 12, 2022 at 12:53:33PM -0300, Jason Gunthorpe wrote:
> try_module_get() must be undone if kvmgt_guest_init() fails or we leak the
> module reference count on the failure path since the close_device op is
> never called in this case.
> 
> Fixes: 9bdb073464d6 ("drm/i915/gvt: Change KVMGT as self load module")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

This is all gone with the i915 refactor.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  6:00     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:00 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

This looks good except for the extern nitpick:

Reviewed-by: Christoph Hellwig <hch@lst.de>

However I'd move this before the previous patch.  More of the explanation
there.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-13  6:00     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:00 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

This looks good except for the extern nitpick:

Reviewed-by: Christoph Hellwig <hch@lst.de>

However I'd move this before the previous patch.  More of the explanation
there.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  6:01     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:01 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

On Tue, Apr 12, 2022 at 12:53:31PM -0300, Jason Gunthorpe wrote:
> Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> reason to use a group interface here, kvmgt has easy access to a
> vfio_device.

Once this is moved after the vfio_dma_rw, this is the last user of
the vfio_group, and I think it would make sense to merge it with the
patch to remove the vfio_group instead of leaving that around once
unused.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-13  6:01     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:01 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Tue, Apr 12, 2022 at 12:53:31PM -0300, Jason Gunthorpe wrote:
> Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> reason to use a group interface here, kvmgt has easy access to a
> vfio_device.

Once this is moved after the vfio_dma_rw, this is the last user of
the vfio_group, and I think it would make sense to merge it with the
patch to remove the vfio_group instead of leaving that around once
unused.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 8/9] vfio: Remove dead code
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  6:01     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:01 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 8/9] vfio: Remove dead code
@ 2022-04-13  6:01     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:01 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-13  6:11     ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:11 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L

On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> +		return -EINVAL;

I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
otherwise there will be too many backtraces to be useful if a driver
ever gets the API wrong.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-13  6:11     ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13  6:11 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> +		return -EINVAL;

I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
otherwise there will be too many backtraces to be useful if a driver
ever gets the API wrong.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13  5:55     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 11:39       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 07:55:24AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:28PM -0300, Jason Gunthorpe wrote:
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> Instead of bothering the drivers with the notifiers at all, the two
> notifier_blocks should move into struct vfio_device, and the
> vfio_ops should just grow two new dma_unmap and set_kvm methods.
> 
> This will isolate the drivers from the whole notifiers mess and its
> boilerplate code.

I already looked into that for a while; it is a real mess too because
of how the notifiers are used by the current drivers, e.g. gvt assumes
the notifier is called during its open_device callback to set up its
kvm.

The unmap is less convoluted and I might still try to do that.

For this series I prefer to leave it alone

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 11:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 07:55:24AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:28PM -0300, Jason Gunthorpe wrote:
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> Instead of bothering the drivers with the notifiers at all, the two
> notifier_blocks should move into struct vfio_device, and the
> vfio_ops should just grow two new dma_unmap and set_kvm methods.
> 
> This will isolate the drivers from the whole notifiers mess and its
> boilerplate code.

I already looked into that for a while; it is a real mess too because
of how the notifiers are used by the current drivers, e.g. gvt assumes
the notifier is called during its open_device callback to set up its
kvm.

The unmap is less convoluted and I might still try to do that.

For this series I prefer to leave it alone

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 11:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 07:55:24AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:28PM -0300, Jason Gunthorpe wrote:
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> Instead of bothering the drivers with the notifiers at all, the two
> notifier_blocks should move into struct vfio_device, and the
> vfio_ops should just grow two new dma_unmap and set_kvm methods.
> 
> This will isolate the drivers from the whole notifiers mess and its
> boilerplate code.

I already looked into that for a while; it is a real mess too because
of how the notifiers are used by the current drivers, e.g. gvt assumes
the notifier is called during its open_device callback to set up its
kvm.

The unmap is less convoluted and I might still try to do that.

For this series I prefer to leave it alone

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-13  5:57     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 11:40       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 07:57:17AM +0200, Christoph Hellwig wrote:
> > -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> >  				  int npage, int prot, unsigned long *phys_pfn);
> >  
> > -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> 
> Please drop the externs when you touch this (also for the actual
> header).

Alex has been asking to keep them in the H files for consistency

Removing from the docs should be fine though

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-13 11:40       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 07:57:17AM +0200, Christoph Hellwig wrote:
> > -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> >  				  int npage, int prot, unsigned long *phys_pfn);
> >  
> > -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> 
> Please drop the externs when you touch this (also for the actual
> header).

Alex has been asking to keep them in the H files for consistency

Removing from the docs should be fine though

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-13 11:40       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 11:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 07:57:17AM +0200, Christoph Hellwig wrote:
> > -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> >  				  int npage, int prot, unsigned long *phys_pfn);
> >  
> > -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> > +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> 
> Please drop the externs when you touch this (also for the actual
> header).

Alex has been asking to keep them in the H files for consistency

Removing from the docs should be fine though

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Make the rest of the VFIO driver interface use vfio_device
  2022-04-12 15:53 ` Jason Gunthorpe
                   ` (11 preceding siblings ...)
  (?)
@ 2022-04-13 12:31 ` Patchwork
  -1 siblings, 0 replies; 141+ messages in thread
From: Patchwork @ 2022-04-13 12:31 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device
URL   : https://patchwork.freedesktop.org/series/102606/
State : warning

== Summary ==

Error: dim checkpatch failed
1a0dcd52cf62 vfio: Make vfio_(un)register_notifier accept a vfio_device
-:33: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#33: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:919:
+	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
 				&vdev->iommu_notifier);

-:42: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#42: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:928:
+	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
 				&vdev->group_notifier);

-:51: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#51: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:966:
+	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 					&vdev->group_notifier);

-:56: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#56: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:970:
+	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
 					&vdev->iommu_notifier);

-:74: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#74: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:1005:
+	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
 					&vdev->iommu_notifier);

-:80: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#80: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:1010:
+	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 					&vdev->group_notifier);

-:268: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#268: FILE: include/linux/vfio.h:181:
+extern int vfio_register_notifier(struct vfio_device *dev,

-:273: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#273: FILE: include/linux/vfio.h:185:
+extern int vfio_unregister_notifier(struct vfio_device *dev,

-:276: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 8 checks, 217 lines checked
6895636f3f14 vfio/ccw: Remove mdev from struct channel_program
-:206: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#206: FILE: drivers/s390/cio/vfio_ccw_cp.h:44:
+extern int cp_init(struct channel_program *cp, union orb *orb);

-:223: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 1 checks, 183 lines checked
a49bf1b00724 vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
-:209: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#209: FILE: include/linux/vfio.h:153:
+extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,

-:212: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#212: FILE: include/linux/vfio.h:155:
+extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,

-:215: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 2 checks, 169 lines checked
3c71cd007be5 drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
-:52: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 0 checks, 32 lines checked
868856efe678 vfio: Pass in a struct vfio_device * to vfio_dma_rw()
-:24: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#24: FILE: drivers/gpu/drm/i915/gvt/kvmgt.c:2187:
+	return vfio_dma_rw(

-:93: CHECK:AVOID_EXTERNS: extern prototypes should be avoided in .h files
#93: FILE: include/linux/vfio.h:164:
+extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,

-:96: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 2 checks, 67 lines checked
197e48b986e6 drm/i915/gvt: Add missing module_put() in error unwind
-:34: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 0 checks, 17 lines checked
f2ddbd56fbfa drm/i915/gvt: Delete kvmgt_vdev::vfio_group
-:70: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 0 checks, 52 lines checked
adaa03d8ab38 vfio: Remove dead code
-:224: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 0 checks, 195 lines checked
77d787fcf640 vfio: Remove calls to vfio_group_add_container_user()
-:199: WARNING:FROM_SIGN_OFF_MISMATCH: From:/Signed-off-by: email address mismatch: 'From: Jason Gunthorpe <jgg@ziepe.ca>' != 'Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>'

total: 0 errors, 1 warnings, 0 checks, 156 lines checked



^ permalink raw reply	[flat|nested] 141+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Make the rest of the VFIO driver interface use vfio_device
  2022-04-12 15:53 ` Jason Gunthorpe
                   ` (12 preceding siblings ...)
  (?)
@ 2022-04-13 12:31 ` Patchwork
  -1 siblings, 0 replies; 141+ messages in thread
From: Patchwork @ 2022-04-13 12:31 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device
URL   : https://patchwork.freedesktop.org/series/102606/
State : warning

== Summary ==

Error: dim sparse failed
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+./include/linux/find.h:40:31: warning: shift count is negative (-24)
+./include/linux/find.h:40:31: warning: shift count is negative (-24)
+./include/linux/find.h:40:31: warning: shift count is negative (-24)
+./include/linux/find.h:40:31: warning: shift count is negative (-24)
+./include/linux/find.h:40:31: warning: shift count is negative (-448)
+./include/linux/find.h:40:31: warning: shift count is negative (-448)



^ permalink raw reply	[flat|nested] 141+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for Make the rest of the VFIO driver interface use vfio_device
  2022-04-12 15:53 ` Jason Gunthorpe
                   ` (13 preceding siblings ...)
  (?)
@ 2022-04-13 12:56 ` Patchwork
  -1 siblings, 0 replies; 141+ messages in thread
From: Patchwork @ 2022-04-13 12:56 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 7353 bytes --]

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device
URL   : https://patchwork.freedesktop.org/series/102606/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11493 -> Patchwork_102606v1
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/index.html

Participating hosts (48 -> 44)
------------------------------

  Additional (1): fi-pnv-d510 
  Missing    (5): fi-rkl-guc fi-bsw-cyan fi-icl-u2 bat-jsl-2 fi-bdw-samus 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_102606v1:

### IGT changes ###

#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_module_load@reload:
    - {bat-dg2-9}:        [PASS][1] -> [FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/bat-dg2-9/igt@i915_module_load@reload.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/bat-dg2-9/igt@i915_module_load@reload.html

  * igt@i915_pm_rpm@module-reload:
    - {bat-dg2-9}:        NOTRUN -> [SKIP][3] +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/bat-dg2-9/igt@i915_pm_rpm@module-reload.html

  * igt@i915_selftest@live@gem:
    - {bat-dg2-9}:        NOTRUN -> [FAIL][4] +34 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/bat-dg2-9/igt@i915_selftest@live@gem.html

  
Known issues
------------

  Here are the changes found in Patchwork_102606v1 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@hangcheck:
    - fi-snb-2600:        [PASS][5] -> [INCOMPLETE][6] ([i915#3921])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/fi-snb-2600/igt@i915_selftest@live@hangcheck.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/fi-snb-2600/igt@i915_selftest@live@hangcheck.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-c:
    - fi-pnv-d510:        NOTRUN -> [SKIP][7] ([fdo#109271] / [i915#5341])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/fi-pnv-d510/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-c.html

  * igt@prime_vgem@basic-userptr:
    - fi-pnv-d510:        NOTRUN -> [SKIP][8] ([fdo#109271]) +39 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/fi-pnv-d510/igt@prime_vgem@basic-userptr.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@hangcheck:
    - fi-hsw-4770:        [INCOMPLETE][9] ([i915#4785]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/fi-hsw-4770/igt@i915_selftest@live@hangcheck.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/fi-hsw-4770/igt@i915_selftest@live@hangcheck.html

  * igt@kms_busy@basic@modeset:
    - {bat-adlp-6}:       [DMESG-WARN][11] ([i915#3576]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/bat-adlp-6/igt@kms_busy@basic@modeset.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/bat-adlp-6/igt@kms_busy@basic@modeset.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
  [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
  [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
  [i915#2575]: https://gitlab.freedesktop.org/drm/intel/issues/2575
  [i915#2582]: https://gitlab.freedesktop.org/drm/intel/issues/2582
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3576]: https://gitlab.freedesktop.org/drm/intel/issues/3576
  [i915#3595]: https://gitlab.freedesktop.org/drm/intel/issues/3595
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3921]: https://gitlab.freedesktop.org/drm/intel/issues/3921
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4215]: https://gitlab.freedesktop.org/drm/intel/issues/4215
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4785]: https://gitlab.freedesktop.org/drm/intel/issues/4785
  [i915#4873]: https://gitlab.freedesktop.org/drm/intel/issues/4873
  [i915#4897]: https://gitlab.freedesktop.org/drm/intel/issues/4897
  [i915#5171]: https://gitlab.freedesktop.org/drm/intel/issues/5171
  [i915#5174]: https://gitlab.freedesktop.org/drm/intel/issues/5174
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5193]: https://gitlab.freedesktop.org/drm/intel/issues/5193
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5275]: https://gitlab.freedesktop.org/drm/intel/issues/5275
  [i915#5341]: https://gitlab.freedesktop.org/drm/intel/issues/5341
  [i915#5606]: https://gitlab.freedesktop.org/drm/intel/issues/5606
  [i915#5608]: https://gitlab.freedesktop.org/drm/intel/issues/5608


Build changes
-------------

  * Linux: CI_DRM_11493 -> Patchwork_102606v1

  CI-20190529: 20190529
  CI_DRM_11493: 83f019abc2a24a2753dea6eb4416a8210c79adf1 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6420: a3885810ccc0ce9e6552a20c910a0a322eca466c @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_102606v1: 102606v1 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

488fe176aa24 vfio: Remove calls to vfio_group_add_container_user()
a2a213b66f51 vfio: Remove dead code
6b49c8e3df85 drm/i915/gvt: Delete kvmgt_vdev::vfio_group
cf1839dd7ebe drm/i915/gvt: Add missing module_put() in error unwind
cd5c7b446449 vfio: Pass in a struct vfio_device * to vfio_dma_rw()
67881707aece drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
d620dd83a5ca vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
253ab4e1fe51 vfio/ccw: Remove mdev from struct channel_program
8a2b87987e7a vfio: Make vfio_(un)register_notifier accept a vfio_device

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/index.html

[-- Attachment #2: Type: text/html, Size: 5854 bytes --]

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-13  6:00     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 13:39       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 08:00:08AM +0200, Christoph Hellwig wrote:
> This looks good except for the extern nitpick:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> However I'd move this before the previous patch.  More of the explanation
> there.

Yes, that looks good, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-13 13:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 08:00:08AM +0200, Christoph Hellwig wrote:
> This looks good except for the extern nitpick:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> However I'd move this before the previous patch.  More of the explanation
> there.

Yes, that looks good, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-13 13:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 08:00:08AM +0200, Christoph Hellwig wrote:
> This looks good except for the extern nitpick:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 
> However I'd move this before the previous patch.  More of the explanation
> there.

Yes, that looks good, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
  2022-04-13  6:01     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 13:39       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 08:01:10AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:31PM -0300, Jason Gunthorpe wrote:
> > Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> > reason to use a group interface here, kvmgt has easy access to a
> > vfio_device.
> 
> Once this is moved after the vfio_dma_rw, this is the last user of
> the vfio_group, and I think it would make sense to merge it with the
> patch to remove the vfio_group instead of leaving that around once
> unused.

Done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-13 13:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 08:01:10AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:31PM -0300, Jason Gunthorpe wrote:
> > Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> > reason to use a group interface here, kvmgt has easy access to a
> > vfio_device.
> 
> Once this is moved after the vfio_dma_rw, this is the last user of
> the vfio_group, and I think it would make sense to merge it with the
> patch to remove the vfio_group instead of leaving that around once
> unused.

Done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-13 13:39       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 13:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 08:01:10AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:31PM -0300, Jason Gunthorpe wrote:
> > Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> > reason to use a group interface here, kvmgt has easy access to a
> > vfio_device.
> 
> Once this is moved after the vfio_dma_rw, this is the last user of
> the vfio_group, and I think it would make sense to merge it with the
> patch to remove the vfio_group instead of leaving that around once
> unused.

Done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-13  6:11     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 14:03       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 14:03 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 08:11:05AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> > +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> > +		return -EINVAL;
> 
> I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
> otherwise there will be too many backtraces to be useful if a driver
> ever gets the API wrong.

Sure, I added a wrapper to make that have less overhead and merged it
with the other 'driver is calling this correctly' checks:

@@ -1330,6 +1330,12 @@ static int vfio_group_add_container_user(struct vfio_group *group)
 
 static const struct file_operations vfio_device_fops;
 
+/* true if the vfio_device has open_device() called but not close_device() */
+static bool vfio_assert_device_open(struct vfio_device *device)
+{
+	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
+}
+
 static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 {
 	struct vfio_device *device;
@@ -1544,6 +1550,7 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 	struct vfio_device *device = filep->private_data;
 
 	mutex_lock(&device->dev_set->lock);
+	vfio_assert_device_open(device);
 	if (!--device->open_count && device->ops->close_device)
 		device->ops->close_device(device);
 	mutex_unlock(&device->dev_set->lock);
@@ -2112,7 +2119,7 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
@@ -2121,9 +2128,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
@@ -2153,15 +2157,12 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !npage)
+	if (!user_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
@@ -2198,10 +2199,7 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!data || len <= 0)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+	if (!data || len <= 0 || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	container = vdev->group->container;
@@ -2294,10 +2292,7 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb || !events || (*events == 0))
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !events || (*events == 0) || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
@@ -2321,10 +2316,7 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
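
For clarity, the calling contract these checks assume looks roughly like
this from the driver side (hypothetical helper, names invented purely for
illustration, not part of this series):

	/*
	 * Hypothetical sketch only: pin/unpin may only be used between
	 * open_device() and close_device(), which is exactly the rule
	 * vfio_assert_device_open() now enforces.
	 */
	static int my_mdev_map_one(struct vfio_device *vdev,
				   unsigned long iova_pfn,
				   unsigned long *phys_pfn)
	{
		int ret;

		/* open_count is guaranteed non-zero here, so the assert stays quiet */
		ret = vfio_pin_pages(vdev, &iova_pfn, 1,
				     IOMMU_READ | IOMMU_WRITE, phys_pfn);
		if (ret != 1)
			return ret < 0 ? ret : -EFAULT;

		/* the matching vfio_unpin_pages() must happen before close_device() */
		return 0;
	}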

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-13 14:03       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 14:03 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 08:11:05AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> > +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> > +		return -EINVAL;
> 
> I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
> otherwise there will be too many backtraces to be useful if a driver
> ever gets the API wrong.

Sure, I added a wrapper to make that have less overhead and merged it
with the other 'driver is calling this correctly' checks:

@@ -1330,6 +1330,12 @@ static int vfio_group_add_container_user(struct vfio_group *group)
 
 static const struct file_operations vfio_device_fops;
 
+/* true if the vfio_device has open_device() called but not close_device() */
+static bool vfio_assert_device_open(struct vfio_device *device)
+{
+	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
+}
+
 static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 {
 	struct vfio_device *device;
@@ -1544,6 +1550,7 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 	struct vfio_device *device = filep->private_data;
 
 	mutex_lock(&device->dev_set->lock);
+	vfio_assert_device_open(device);
 	if (!--device->open_count && device->ops->close_device)
 		device->ops->close_device(device);
 	mutex_unlock(&device->dev_set->lock);
@@ -2112,7 +2119,7 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
@@ -2121,9 +2128,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
@@ -2153,15 +2157,12 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !npage)
+	if (!user_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
@@ -2198,10 +2199,7 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!data || len <= 0)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+	if (!data || len <= 0 || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	container = vdev->group->container;
@@ -2294,10 +2292,7 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb || !events || (*events == 0))
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !events || (*events == 0) || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
@@ -2321,10 +2316,7 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-13 14:03       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 14:03 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 08:11:05AM +0200, Christoph Hellwig wrote:
> On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> > +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> > +		return -EINVAL;
> 
> I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
> otherwise there will be too many backtraces to be useful if a driver
> ever gets the API wrong.

Sure, I added a wrapper to make that have less overhead and merged it
with the other 'driver is calling this correctly' checks:

@@ -1330,6 +1330,12 @@ static int vfio_group_add_container_user(struct vfio_group *group)
 
 static const struct file_operations vfio_device_fops;
 
+/* true if the vfio_device has open_device() called but not close_device() */
+static bool vfio_assert_device_open(struct vfio_device *device)
+{
+	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
+}
+
 static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 {
 	struct vfio_device *device;
@@ -1544,6 +1550,7 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 	struct vfio_device *device = filep->private_data;
 
 	mutex_lock(&device->dev_set->lock);
+	vfio_assert_device_open(device);
 	if (!--device->open_count && device->ops->close_device)
 		device->ops->close_device(device);
 	mutex_unlock(&device->dev_set->lock);
@@ -2112,7 +2119,7 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
@@ -2121,9 +2128,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
@@ -2153,15 +2157,12 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !npage)
+	if (!user_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
-		return -EINVAL;
-
 	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
@@ -2198,10 +2199,7 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!data || len <= 0)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(vdev->open_count)))
+	if (!data || len <= 0 || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	container = vdev->group->container;
@@ -2294,10 +2292,7 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb || !events || (*events == 0))
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !events || (*events == 0) || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
@@ -2321,10 +2316,7 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb)
-		return -EINVAL;
-
-	if (WARN_ON(!READ_ONCE(dev->open_count)))
+	if (!nb || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {

Thanks,
Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for Make the rest of the VFIO driver interface use vfio_device
  2022-04-12 15:53 ` Jason Gunthorpe
                   ` (14 preceding siblings ...)
  (?)
@ 2022-04-13 15:21 ` Patchwork
  -1 siblings, 0 replies; 141+ messages in thread
From: Patchwork @ 2022-04-13 15:21 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 30285 bytes --]

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device
URL   : https://patchwork.freedesktop.org/series/102606/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11493_full -> Patchwork_102606v1_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 11)
------------------------------

  Missing    (2): shard-rkl shard-dg1 

Known issues
------------

  Here are the changes found in Patchwork_102606v1_full that come from known issues:

### CI changes ###

#### Possible fixes ####

  * boot:
    - shard-skl:          ([PASS][1], [PASS][2], [PASS][3], [PASS][4], [PASS][5], [PASS][6], [PASS][7], [PASS][8], [PASS][9], [PASS][10], [FAIL][11], [PASS][12], [PASS][13], [PASS][14], [PASS][15], [PASS][16], [PASS][17], [PASS][18], [PASS][19]) ([i915#5032]) -> ([PASS][20], [PASS][21], [PASS][22], [PASS][23], [PASS][24], [PASS][25], [PASS][26], [PASS][27], [PASS][28], [PASS][29], [PASS][30], [PASS][31], [PASS][32], [PASS][33], [PASS][34], [PASS][35], [PASS][36], [PASS][37], [PASS][38], [PASS][39], [PASS][40], [PASS][41], [PASS][42], [PASS][43])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl9/boot.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl9/boot.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl8/boot.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl8/boot.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl7/boot.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl7/boot.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl6/boot.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl6/boot.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl6/boot.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl5/boot.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl5/boot.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl4/boot.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl4/boot.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl3/boot.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl3/boot.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl1/boot.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl1/boot.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl10/boot.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl10/boot.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl6/boot.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl9/boot.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl9/boot.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl9/boot.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/boot.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/boot.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/boot.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl7/boot.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl7/boot.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl6/boot.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl6/boot.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/boot.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/boot.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl4/boot.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl4/boot.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl4/boot.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/boot.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/boot.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl2/boot.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl2/boot.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl1/boot.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl1/boot.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl10/boot.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl10/boot.html
    - shard-glk:          ([PASS][44], [PASS][45], [PASS][46], [PASS][47], [PASS][48], [PASS][49], [PASS][50], [PASS][51], [PASS][52], [PASS][53], [PASS][54], [PASS][55], [PASS][56], [PASS][57], [PASS][58], [PASS][59], [PASS][60], [PASS][61], [PASS][62], [PASS][63], [FAIL][64], [PASS][65], [PASS][66], [PASS][67], [PASS][68]) ([i915#4392]) -> ([PASS][69], [PASS][70], [PASS][71], [PASS][72], [PASS][73], [PASS][74], [PASS][75], [PASS][76], [PASS][77], [PASS][78], [PASS][79], [PASS][80], [PASS][81], [PASS][82], [PASS][83], [PASS][84], [PASS][85], [PASS][86], [PASS][87], [PASS][88], [PASS][89], [PASS][90], [PASS][91], [PASS][92], [PASS][93])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk1/boot.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk1/boot.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk2/boot.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk2/boot.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk2/boot.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk3/boot.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk3/boot.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk4/boot.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk4/boot.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk4/boot.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk5/boot.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk5/boot.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk5/boot.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk6/boot.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk6/boot.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk6/boot.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk7/boot.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk7/boot.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk7/boot.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk8/boot.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk8/boot.html
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk8/boot.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk9/boot.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk9/boot.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk9/boot.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk9/boot.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk9/boot.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk9/boot.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk8/boot.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk8/boot.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk8/boot.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk7/boot.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk7/boot.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk7/boot.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk6/boot.html
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk6/boot.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk5/boot.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk5/boot.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk5/boot.html
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk4/boot.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk4/boot.html
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk4/boot.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/boot.html
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/boot.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/boot.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk2/boot.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk2/boot.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk1/boot.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk1/boot.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk1/boot.html

  

### IGT changes ###

#### Issues hit ####

  * igt@core_hotunplug@unbind-rebind:
    - shard-apl:          [PASS][94] -> [DMESG-WARN][95] ([i915#5437])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-apl8/igt@core_hotunplug@unbind-rebind.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl4/igt@core_hotunplug@unbind-rebind.html

  * igt@gem_ccs@ctrl-surf-copy-new-ctx:
    - shard-iclb:         NOTRUN -> [SKIP][96] ([i915#5327])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@gem_ccs@ctrl-surf-copy-new-ctx.html

  * igt@gem_eio@unwedge-stress:
    - shard-iclb:         [PASS][97] -> [TIMEOUT][98] ([i915#2481] / [i915#3070])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-iclb3/igt@gem_eio@unwedge-stress.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb1/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglb:         [PASS][99] -> [FAIL][100] ([i915#2842])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-tglb7/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-tglb1/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_params@no-blt:
    - shard-iclb:         NOTRUN -> [SKIP][101] ([fdo#109283])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@gem_exec_params@no-blt.html

  * igt@gem_lmem_swapping@basic:
    - shard-kbl:          NOTRUN -> [SKIP][102] ([fdo#109271] / [i915#4613]) +1 similar issue
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - shard-skl:          NOTRUN -> [SKIP][103] ([fdo#109271] / [i915#4613]) +1 similar issue
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@gem_lmem_swapping@random-engines:
    - shard-iclb:         NOTRUN -> [SKIP][104] ([i915#4613])
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@gem_lmem_swapping@random-engines.html

  * igt@gem_lmem_swapping@smem-oom:
    - shard-apl:          NOTRUN -> [SKIP][105] ([fdo#109271] / [i915#4613])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl3/igt@gem_lmem_swapping@smem-oom.html

  * igt@gem_mmap_gtt@coherency:
    - shard-snb:          [PASS][106] -> [SKIP][107] ([fdo#109271])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-snb4/igt@gem_mmap_gtt@coherency.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-snb6/igt@gem_mmap_gtt@coherency.html

  * igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-y-tiled:
    - shard-iclb:         NOTRUN -> [SKIP][108] ([i915#768])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-y-tiled.html

  * igt@gem_userptr_blits@readonly-pwrite-unsync:
    - shard-iclb:         NOTRUN -> [SKIP][109] ([i915#3297])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@gem_userptr_blits@readonly-pwrite-unsync.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-apl:          [PASS][110] -> [DMESG-WARN][111] ([i915#5566] / [i915#716])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-apl2/igt@gen9_exec_parse@allowed-single.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl7/igt@gen9_exec_parse@allowed-single.html

  * igt@i915_pm_lpsp@screens-disabled:
    - shard-iclb:         NOTRUN -> [SKIP][112] ([i915#1902])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@i915_pm_lpsp@screens-disabled.html

  * igt@kms_async_flips@alternate-sync-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][113] ([i915#2521])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl2/igt@kms_async_flips@alternate-sync-async-flip.html

  * igt@kms_big_fb@4-tiled-16bpp-rotate-90:
    - shard-glk:          NOTRUN -> [SKIP][114] ([fdo#109271]) +49 similar issues
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/igt@kms_big_fb@4-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
    - shard-iclb:         NOTRUN -> [SKIP][115] ([i915#5286])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@linear-32bpp-rotate-0:
    - shard-glk:          [PASS][116] -> [DMESG-WARN][117] ([i915#118])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk4/igt@kms_big_fb@linear-32bpp-rotate-0.html
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk1/igt@kms_big_fb@linear-32bpp-rotate-0.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-kbl:          NOTRUN -> [SKIP][118] ([fdo#109271] / [i915#3777])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-skl:          NOTRUN -> [SKIP][119] ([fdo#109271] / [i915#3777]) +4 similar issues
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
    - shard-apl:          NOTRUN -> [SKIP][120] ([fdo#109271] / [i915#3777]) +1 similar issue
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl3/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][121] ([i915#3743]) +3 similar issues
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-skl:          NOTRUN -> [FAIL][122] ([i915#3763])
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][123] ([fdo#109271] / [i915#3886]) +4 similar issues
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_ccs@pipe-a-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs_cc:
    - shard-iclb:         NOTRUN -> [SKIP][124] ([fdo#109278] / [i915#3886]) +2 similar issues
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_ccs@pipe-a-random-ccs-data-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
    - shard-skl:          NOTRUN -> [SKIP][125] ([fdo#109271] / [i915#3886]) +11 similar issues
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/igt@kms_ccs@pipe-b-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-crc-sprite-planes-basic-y_tiled_gen12_rc_ccs_cc:
    - shard-glk:          NOTRUN -> [SKIP][126] ([fdo#109271] / [i915#3886]) +2 similar issues
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/igt@kms_ccs@pipe-b-crc-sprite-planes-basic-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][127] ([fdo#109271] / [i915#3886])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl3/igt@kms_ccs@pipe-c-ccs-on-another-bo-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium@hdmi-crc-nonplanar-formats:
    - shard-iclb:         NOTRUN -> [SKIP][128] ([fdo#109284] / [fdo#111827]) +2 similar issues
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@kms_chamelium@hdmi-crc-nonplanar-formats.html

  * igt@kms_chamelium@vga-hpd-after-suspend:
    - shard-skl:          NOTRUN -> [SKIP][129] ([fdo#109271] / [fdo#111827]) +27 similar issues
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl8/igt@kms_chamelium@vga-hpd-after-suspend.html

  * igt@kms_color_chamelium@pipe-b-ctm-0-25:
    - shard-kbl:          NOTRUN -> [SKIP][130] ([fdo#109271] / [fdo#111827]) +3 similar issues
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_color_chamelium@pipe-b-ctm-0-25.html

  * igt@kms_color_chamelium@pipe-b-ctm-0-5:
    - shard-glk:          NOTRUN -> [SKIP][131] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/igt@kms_color_chamelium@pipe-b-ctm-0-5.html

  * igt@kms_color_chamelium@pipe-b-ctm-limited-range:
    - shard-apl:          NOTRUN -> [SKIP][132] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl1/igt@kms_color_chamelium@pipe-b-ctm-limited-range.html

  * igt@kms_color_chamelium@pipe-d-gamma:
    - shard-iclb:         NOTRUN -> [SKIP][133] ([fdo#109278] / [fdo#109284] / [fdo#111827])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_color_chamelium@pipe-d-gamma.html

  * igt@kms_content_protection@lic:
    - shard-kbl:          NOTRUN -> [TIMEOUT][134] ([i915#1319])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-random:
    - shard-iclb:         NOTRUN -> [SKIP][135] ([fdo#109278] / [fdo#109279]) +1 similar issue
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_cursor_crc@pipe-b-cursor-512x512-random.html

  * igt@kms_cursor_crc@pipe-c-cursor-512x170-sliding:
    - shard-skl:          NOTRUN -> [SKIP][136] ([fdo#109271] / [i915#1888] / [i915#5691])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/igt@kms_cursor_crc@pipe-c-cursor-512x170-sliding.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
    - shard-iclb:         NOTRUN -> [SKIP][137] ([fdo#109274] / [fdo#109278]) +1 similar issue
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
    - shard-skl:          NOTRUN -> [FAIL][138] ([i915#2346] / [i915#533])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@pipe-d-single-bo:
    - shard-kbl:          NOTRUN -> [SKIP][139] ([fdo#109271] / [i915#533])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_cursor_legacy@pipe-d-single-bo.html

  * igt@kms_cursor_legacy@pipe-d-single-move:
    - shard-iclb:         NOTRUN -> [SKIP][140] ([fdo#109278]) +14 similar issues
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@kms_cursor_legacy@pipe-d-single-move.html

  * igt@kms_flip@2x-flip-vs-fences-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][141] ([fdo#109274])
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_flip@2x-flip-vs-fences-interruptible.html

  * igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1:
    - shard-glk:          [PASS][142] -> [FAIL][143] ([i915#79])
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-glk1/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk2/igt@kms_flip@flip-vs-expired-vblank@b-hdmi-a1.html

  * igt@kms_flip@flip-vs-suspend@a-dp1:
    - shard-apl:          [PASS][144] -> [DMESG-WARN][145] ([i915#180]) +3 similar issues
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-apl3/igt@kms_flip@flip-vs-suspend@a-dp1.html
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl2/igt@kms_flip@flip-vs-suspend@a-dp1.html

  * igt@kms_flip@plain-flip-fb-recreate@b-edp1:
    - shard-skl:          [PASS][146] -> [FAIL][147] ([i915#2122]) +5 similar issues
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl7/igt@kms_flip@plain-flip-fb-recreate@b-edp1.html
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/igt@kms_flip@plain-flip-fb-recreate@b-edp1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling:
    - shard-kbl:          NOTRUN -> [SKIP][148] ([fdo#109271]) +84 similar issues
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-mmap-cpu:
    - shard-iclb:         NOTRUN -> [SKIP][149] ([fdo#109280]) +7 similar issues
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch-dpms@bpc-switch-dpms-edp-1-pipe-a:
    - shard-skl:          [PASS][150] -> [FAIL][151] ([i915#1188])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl8/igt@kms_hdr@bpc-switch-dpms@bpc-switch-dpms-edp-1-pipe-a.html
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl4/igt@kms_hdr@bpc-switch-dpms@bpc-switch-dpms-edp-1-pipe-a.html

  * igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c:
    - shard-iclb:         NOTRUN -> [SKIP][152] ([fdo#109289])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_pipe_b_c_ivb@disable-pipe-b-enable-pipe-c.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
    - shard-skl:          NOTRUN -> [SKIP][153] ([fdo#109271] / [i915#533]) +1 similar issue
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl2/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html

  * igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes:
    - shard-kbl:          NOTRUN -> [DMESG-WARN][154] ([i915#180])
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl1/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b-planes.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb:
    - shard-skl:          NOTRUN -> [FAIL][155] ([fdo#108145] / [i915#265]) +4 similar issues
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-skl:          NOTRUN -> [FAIL][156] ([i915#265])
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [PASS][157] -> [FAIL][158] ([fdo#108145] / [i915#265])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl6/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_plane_lowres@pipe-c-tiling-x:
    - shard-iclb:         NOTRUN -> [SKIP][159] ([i915#3536])
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_plane_lowres@pipe-c-tiling-x.html

  * igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-edp-1-scaler-with-clipping-clamping:
    - shard-skl:          [PASS][160] -> [DMESG-WARN][161] ([i915#1982])
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-skl4/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-edp-1-scaler-with-clipping-clamping.html
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl4/igt@kms_plane_scaling@scaler-with-clipping-clamping@pipe-c-edp-1-scaler-with-clipping-clamping.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area:
    - shard-skl:          NOTRUN -> [SKIP][162] ([fdo#109271] / [i915#658]) +2 similar issues
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl10/igt@kms_psr2_sf@plane-move-sf-dmg-area.html

  * igt@kms_psr2_su@frontbuffer-xrgb8888:
    - shard-iclb:         [PASS][163] -> [SKIP][164] ([fdo#109642] / [fdo#111068] / [i915#658])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-iclb2/igt@kms_psr2_su@frontbuffer-xrgb8888.html
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb8/igt@kms_psr2_su@frontbuffer-xrgb8888.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-kbl:          NOTRUN -> [SKIP][165] ([fdo#109271] / [i915#658])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl6/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         NOTRUN -> [SKIP][166] ([fdo#109441])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_psr@psr2_sprite_plane_move:
    - shard-iclb:         [PASS][167] -> [SKIP][168] ([fdo#109441]) +1 similar issue
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-iclb2/igt@kms_psr@psr2_sprite_plane_move.html
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb6/igt@kms_psr@psr2_sprite_plane_move.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - shard-apl:          NOTRUN -> [SKIP][169] ([fdo#109271]) +44 similar issues
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-apl3/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-a:
    - shard-skl:          NOTRUN -> [SKIP][170] ([fdo#109271]) +358 similar issues
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl7/igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-a.html

  * igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-c:
    - shard-iclb:         NOTRUN -> [SKIP][171] ([i915#5030]) +2 similar issues
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_scaling_modes@scaling-mode-none@edp-1-pipe-c.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-kbl:          [PASS][172] -> [DMESG-WARN][173] ([i915#180])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11493/shard-kbl6/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-kbl1/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  * igt@kms_vrr@flip-basic:
    - shard-iclb:         NOTRUN -> [SKIP][174] ([i915#3555])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb5/igt@kms_vrr@flip-basic.html

  * igt@kms_writeback@writeback-fb-id:
    - shard-glk:          NOTRUN -> [SKIP][175] ([fdo#109271] / [i915#2437])
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-glk3/igt@kms_writeback@writeback-fb-id.html
    - shard-skl:          NOTRUN -> [SKIP][176] ([fdo#109271] / [i915#2437])
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl3/igt@kms_writeback@writeback-fb-id.html

  * igt@perf@polling-small-buf:
    - shard-skl:          NOTRUN -> [FAIL][177] ([i915#1722])
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-skl5/igt@perf@polling-small-buf.html

  * igt@prime_nv_api@nv_i915_reimport_twice_check_flink_name:
    - shard-iclb:         NOTRUN -> [SKIP][178] ([fdo#109291]) +1 similar issue
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/shard-iclb3/igt@prime_n

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_102606v1/index.html

[-- Attachment #2: Type: text/html, Size: 33122 bytes --]

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 11:39       ` Jason Gunthorpe
@ 2022-04-13 16:06         ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:06 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian,
	Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 08:39:52AM -0300, Jason Gunthorpe wrote:
> I already looked into that for a while, it is a real mess too because
> of how the notifiers are used by the current drivers, eg gvt assumes
> the notifier is called during its open_device callback to setup its
> kvm.

gvt very much expects kvm to be set before open, and thus to get the
catch-up notifier, yes.  And given that this is how qemu uses
the ioctl, I think we can actually simplify this further and require
vfio_group_set_kvm to be called before open for s390/ap as well and
do away with this whole mess.
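
(For reference, the userspace ordering being relied on looks roughly like
the fragment below -- illustrative only; the fd setup, device name and
surrounding error handling are assumptions:)

	/* group_fd, container_fd and kvm_vfio_fd are assumed already set up */
	struct kvm_device_attr attr = {
		.group = KVM_DEV_VFIO_GROUP,
		.attr = KVM_DEV_VFIO_GROUP_ADD,
		.addr = (__u64)(uintptr_t)&group_fd,
	};

	ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd);
	ioctl(kvm_vfio_fd, KVM_SET_DEVICE_ATTR, &attr);	/* -> vfio_group_set_kvm() */
	/* only then is the device opened, so open_device() runs with kvm set */
	device_fd = ioctl(group_fd, VFIO_GROUP_GET_DEVICE_FD, "0000:00:02.0");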

> For this series I prefer to leave it alone

Ok, let's do it one step at a time.


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 16:06         ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:06 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 08:39:52AM -0300, Jason Gunthorpe wrote:
> I already looked into that for a while, it is a real mess too because
> of how the notifiers are used by the current drivers, eg gvt assumes
> the notifier is called during its open_device callback to setup its
> kvm.

gvt very much expects kvm to be set before open, and thus to get the
catch-up notifier, yes.  And given that this is how qemu uses
the ioctl, I think we can actually simplify this further and require
vfio_group_set_kvm to be called before open for s390/ap as well and
do away with this whole mess.

> For this series I prefer to leave it alone

Ok, let's do it one step at a time.


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-13 14:03       ` Jason Gunthorpe
@ 2022-04-13 16:07         ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:07 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian,
	Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 11:03:05AM -0300, Jason Gunthorpe wrote:
> On Wed, Apr 13, 2022 at 08:11:05AM +0200, Christoph Hellwig wrote:
> > On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> > > +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> > > +		return -EINVAL;
> > 
> > I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
> > otherwise there will be too many backtraces to be useful if a driver
> > ever gets the API wrong.
> 
> Sure, I added a wrapper to make that have less overhead and merged it
> with the other 'driver is calling this correctly' checks:

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-13 16:07         ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:07 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 11:03:05AM -0300, Jason Gunthorpe wrote:
> On Wed, Apr 13, 2022 at 08:11:05AM +0200, Christoph Hellwig wrote:
> > On Tue, Apr 12, 2022 at 12:53:36PM -0300, Jason Gunthorpe wrote:
> > > +	if (WARN_ON(!READ_ONCE(vdev->open_count)))
> > > +		return -EINVAL;
> > 
> > I think all the WARN_ON()s in this patch need to be WARN_ON_ONCE,
> > otherwise there will be too many backtraces to be useful if a driver
> > ever gets the API wrong.
> 
> Sure, I added a wrapper to make that have less overhead and merged it
> with the other 'driver is calling this correctly' checks:

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 16:06         ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 16:18           ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 16:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 06:06:01PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 08:39:52AM -0300, Jason Gunthorpe wrote:
> > I already looked into that for a while, it is a real mess too because
> > of how the notifiers are used by the current drivers, eg gvt assumes
> > the notifier is called during its open_device callback to setup its
> > kvm.
> 
> gvt very much expects kvm to be set before open, and thus to get the
> catch-up notifier, yes.  And given that this is how qemu uses
> the ioctl, I think we can actually simplify this further and require
> vfio_group_set_kvm to be called before open for s390/ap as well and
> do away with this whole mess.

Yeah, I was thinking about that too, but on the other hand I think it
is completely wrong that gvt requires kvm at all. A vfio_device is not
supposed to be tightly linked to KVM - the only exception possibly
being s390..

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 16:18           ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 16:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Tony Krowiak,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Wed, Apr 13, 2022 at 06:06:01PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 08:39:52AM -0300, Jason Gunthorpe wrote:
> > I already looked into that for a while, it is a real mess too because
> > of how the notifiers are used by the current drivers, eg gvt assumes
> > the notifier is called during its open_device callback to setup its
> > kvm.
> 
> gvt very much expects kvm to be set before open, and thus to get the
> catch-up notifier, yes.  And given that this is how qemu uses
> the ioctl, I think we can actually simplify this further and require
> vfio_group_set_kvm to be called before open for s390/ap as well and
> do away with this whole mess.

Yeah, I was thinking about that too, but on the other hand I think it
is completely wrong that gvt requires kvm at all. A vfio_device is not
supposed to be tightly linked to KVM - the only exception possibly
being s390..

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 16:18           ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 16:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 06:06:01PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 08:39:52AM -0300, Jason Gunthorpe wrote:
> > I already looked into that for a while, it is a real mess too because
> > of how the notifiers are used by the current drivers, eg gvt assumes
> > the notifier is called during its open_device callback to setup its
> > kvm.
> 
> gvt very much expects kvm to be set before open, and thus to get the
> catch-up notifier, yes.  And given that this is how qemu uses
> the ioctl, I think we can actually simplify this further and require
> vfio_group_set_kvm to be called before open for s390/ap as well and
> do away with this whole mess.

Yeah, I was thinking about that too, but on the other hand I think it
is completely wrong that gvt requires kvm at all. A vfio_device is not
supposed to be tightly linked to KVM - the only exception possibly
being s390..

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 16:18           ` Jason Gunthorpe
@ 2022-04-13 16:29             ` Christoph Hellwig
  -1 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian,
	Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> Yeah, I was thinking about that too, but on the other hand I think it
> is completely wrong that gvt requires kvm at all. A vfio_device is not
> supposed to be tightly linked to KVM - the only exception possibly
> being s390..

So i915/gvt uses it for:

 - poking into the KVM GFN translations
 - using the KVM page track notifier

No idea how these could be solved in a more generic way.
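
For reference, the page track side is used roughly like this (sketch from
memory, the names and details may well differ from the real kvmgt code):

	static void my_track_write(struct kvm_vcpu *vcpu, gpa_t gpa,
				   const u8 *val, int len,
				   struct kvm_page_track_notifier_node *node)
	{
		/* re-shadow whatever guest PTEs were touched by this write */
	}

	static struct kvm_page_track_notifier_node my_node = {
		.track_write = my_track_write,
	};

	static void my_setup(struct kvm *kvm)
	{
		/* registered once against the VM; individual GFNs are then
		 * write-protected via kvm_slot_page_track_add_page() */
		kvm_page_track_register_notifier(kvm, &my_node);
	}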

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-13 16:29             ` Christoph Hellwig
  0 siblings, 0 replies; 141+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:29 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Jason Herne,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> Yeah, I was thinking about that too, but on the other hand I think it
> is completely wrong that gvt requires kvm at all. A vfio_device is not
> supposed to be tightly linked to KVM - the only exception possibly
> being s390..

So i915/gvt uses it for:

 - poking into the KVM GFN translations
 - using the KVM page track notifier

No idea how these could be solved in a more generic way.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 16:29             ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-13 17:37               ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 17:37 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tian, Kevin, Liu, Yi L

On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> > Yeah, I was thinking about that too, but on the other hand I think it
> > is completely wrong that gvt requires kvm at all. A vfio_device is not
> > supposed to be tightly linked to KVM - the only exception possibly
> > being s390..
> 
> So i915/gvt uses it for:
> 
>  - poking into the KVM GFN translations
>  - using the KVM page track notifier
> 
> No idea how these could be solved in a more generic way.

TBH I'm not sure how any of this works fully correctly..

I see this code getting values it calls GFNs and then passing
them to vfio - which makes no sense. Either a value is a GFN - a
physical memory address of the VM - or it is an IOVA. VFIO only takes
in IOVAs and kvm only takes in GFNs. So these are probably IOVAs really..

But then, I see this code taking GFNs (which are probably IOVAs?) and
passing them to the kvm page track notifier? That can't be right, VFIO
needs to translate the IOVA to a GFN, not assume 1:1...
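
To make the concern concrete, a rough sketch of the pattern in question.
shadow_one_entry() and vgpu_write_protect_gfn() are hypothetical names, and
the vfio_pin_pages() call uses the pre-series struct device based prototype;
this is an illustration, not the actual kvmgt code:

/* Handing the same value to VFIO as an IOVA and to KVM as a GFN only
 * works if the two address spaces happen to be identity mapped. */
static int shadow_one_entry(struct device *mdev_dev, struct kvm *kvm,
			    u64 guest_pte)
{
	unsigned long gfn = guest_pte >> PAGE_SHIFT;	/* GFN or IOVA? */
	unsigned long host_pfn;
	int ret;

	/* VFIO consumes the value as an IOVA in the container */
	ret = vfio_pin_pages(mdev_dev, &gfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &host_pfn);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* KVM consumes the very same value as a GFN */
	vgpu_write_protect_gfn(kvm, gfn);	/* hypothetical helper */

	return 0;
}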

It seems the purpose is to shadow a page table, and it is capturing
user space CPU writes to this page table memory I guess?

The GFNs seem to come from gen8_gtt_get_pfn(), which seems to be parsing
some guest page table?

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 17:37               ` Jason Gunthorpe
  (?)
@ 2022-04-13 19:17                 ` Wang, Zhi A
  -1 siblings, 0 replies; 141+ messages in thread
From: Wang, Zhi A @ 2022-04-13 19:17 UTC (permalink / raw)
  To: Jason Gunthorpe, Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Tian, Kevin, Liu, Yi L

On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
>>> Yeah, I was thinking about that too, but on the other hand I think it
>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
>>> supposed to be tightly linked to KVM - the only exception possibly
>>> being s390..
>>
>> So i915/gvt uses it for:
>>
>>  - poking into the KVM GFN translations
>>  - using the KVM page track notifier
>>
>> No idea how these could be solved in a more generic way.
> 
> TBH I'm not sure how any of this works fully correctly..
> 
> I see this code getting something it calls a GFN and then passing
> them to vfio - which makes no sense. Either a value is a GFN - the
> physical memory address of the VM, or it is an IOVA. VFIO only takes
> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> 
Can you let me know the place? So that I can take a look.

> But then, I see this code taking GFNs (which are probably IOVAs?) and
> passing them to the kvm page track notifier? That can't be right, VFIO
> needs to translate the IOVA to a GFN, not assume 1:1...
> 
GFNs are from the guest page table. The code takes the GFN from an entry
that belongs to a guest page table and requests kvm_page_track to track it,
so that the shadow page table can be updated accordingly.

> It seems the purpose is to shadow a page table, and it is capturing
> user space CPU writes to this page table memory I guess?
> 
Yes. The shadow page table will be built according to the guest GPU page
table. When a guest workload is executed on the GPU, the root pointer of
the shadow page table in the shadow GPU context is used. If the host
enables the IOMMU, the pages used by the shadow page table need to be
mapped as IOVA, and the PFNs in the shadow entries are IOVAs.
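
A rough sketch of that last point - the shadow PTE ends up carrying a DMA
address (an IOVA when the host IOMMU is on), not a raw host PFN. The helper
name and the PTE-flag handling are placeholders, not the literal kvmgt code:

#include <linux/dma-mapping.h>

static int shadow_map_entry(struct device *gpu_dev, struct page *guest_page,
			    u64 *shadow_pte)
{
	dma_addr_t dma;

	/* The GPU must reach this page through the host IOMMU */
	dma = dma_map_page(gpu_dev, guest_page, 0, PAGE_SIZE,
			   DMA_BIDIRECTIONAL);
	if (dma_mapping_error(gpu_dev, dma))
		return -ENOMEM;

	/* So the shadow entry stores an IOVA, plus whatever flag bits
	 * the GTT entry format requires (omitted here). */
	*shadow_pte = dma;

	return 0;
}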

> GFN's seems to come from gen8_gtt_get_pfn which seems to be parsing
> some guest page table?
> 
Yes. It's to extract the PFNs from a page table entry.

> Jason
> 


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 19:17                 ` Wang, Zhi A
  (?)
@ 2022-04-13 20:04                   ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 20:04 UTC (permalink / raw)
  To: Wang, Zhi A
  Cc: Christoph Hellwig, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Vivi, Rodrigo, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Tian, Kevin, Liu,
	Yi L

On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> > On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> >> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> >>> Yeah, I was thinking about that too, but on the other hand I think it
> >>> is completely wrong that gvt requires kvm at all. A vfio_device is not
> >>> supposed to be tightly linked to KVM - the only exception possibly
> >>> being s390..
> >>
> >> So i915/gvt uses it for:
> >>
> >>  - poking into the KVM GFN translations
> >>  - using the KVM page track notifier
> >>
> >> No idea how these could be solved in a more generic way.
> > 
> > TBH I'm not sure how any of this works fully correctly..
> > 
> > I see this code getting something it calls a GFN and then passing
> > them to vfio - which makes no sense. Either a value is a GFN - the
> > physical memory address of the VM, or it is an IOVA. VFIO only takes
> > in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> > 
> Can you let me know the place? So that I can take a look.

Well, for instance:

static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
		unsigned long size, struct page **page)

There is no way that is a GFN, it is an IOVA.

> > It seems the purpose is to shadow a page table, and it is capturing
> > user space CPU writes to this page table memory I guess?
> > 
> Yes.The shadow page will be built according to the guest GPU page table.
> When a guest workload is executed in the GPU, the root pointer of the
> shadow page table in the shadow GPU context is used. If the host enables
> the IOMMU, the pages used by the shadow page table needs to be mapped as
> IOVA, and the PFNs in the shadow entries are IOVAs.

So if the page table in the guest has IOVA addresses then why can you
use them as GFNs?

Or is it that only the page table levels themselves are GFNs and the
actual DMAs are IOVAs? The unclear mixing of GFN and IOVA in the code
makes it inscrutable.

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 20:04                   ` Jason Gunthorpe
  (?)
@ 2022-04-13 21:08                     ` Wang, Zhi A
  -1 siblings, 0 replies; 141+ messages in thread
From: Wang, Zhi A @ 2022-04-13 21:08 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Matthew Rosato, linux-doc, David Airlie, Eric Farman, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Tony Krowiak,
	Vasily Gorbik, intel-gfx, Harald Freudenberger, Vivi,  Rodrigo,
	intel-gvt-dev, Jason Herne, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On 4/13/22 8:04 PM, Jason Gunthorpe wrote:
> On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
>> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
>>> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
>>>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
>>>>> Yeah, I was thinking about that too, but on the other hand I think it
>>>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
>>>>> supposed to be tightly linked to KVM - the only exception possibly
>>>>> being s390..
>>>>
>>>> So i915/gvt uses it for:
>>>>
>>>>  - poking into the KVM GFN translations
>>>>  - using the KVM page track notifier
>>>>
>>>> No idea how these could be solved in a more generic way.
>>>
>>> TBH I'm not sure how any of this works fully correctly..
>>>
>>> I see this code getting something it calls a GFN and then passing
>>> them to vfio - which makes no sense. Either a value is a GFN - the
>>> physical memory address of the VM, or it is an IOVA. VFIO only takes
>>> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
>>>
>> Can you let me know the place? So that I can take a look.
> 
> Well, for instance:
> 
> static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> 		unsigned long size, struct page **page)
> 
> There is no way that is a GFN, it is an IOVA.
> 
I see. The name is vague. There is a promised 1:1 mapping between guest GFN
and host IOVA when a PCI device is passed to a VM; I guess mdev is just
leveraging it, since they share the same code path in QEMU. It's in a
function called vfio_listener_region_add() in the QEMU source code.
Are you planning to change the architecture? It would be nice to know your plan.
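
For context, the 1:1 assumption comes from the way that listener maps guest
RAM into the container: without a vIOMMU the IOVA programmed into VFIO is
simply the GPA. A minimal userspace sketch of the idea (not the actual QEMU
code; map_ram_section() and its arguments are illustrative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int map_ram_section(int container_fd, void *host_va,
			   uint64_t gpa, uint64_t size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uintptr_t)host_va,
		.iova  = gpa,	/* IOVA == GPA only because no vIOMMU sits in between */
		.size  = size,
	};

	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}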

>>> It seems the purpose is to shadow a page table, and it is capturing
>>> user space CPU writes to this page table memory I guess?
>>>
>> Yes.The shadow page will be built according to the guest GPU page table.
>> When a guest workload is executed in the GPU, the root pointer of the
>> shadow page table in the shadow GPU context is used. If the host enables
>> the IOMMU, the pages used by the shadow page table needs to be mapped as
>> IOVA, and the PFNs in the shadow entries are IOVAs.
> 
> So if the page table in the guest has IOVA addreses then why can you
> use them as GFNs?
> 
That's another problem. We don't support a guest enabling the guest IOMMU
(aka virtual IOMMU). The guest/virtual IOMMU is implemented in QEMU, and so
is the translation between guest IOVA and GFN. For an mdev model
implemented in the kernel, there isn't any mechanism so far to reach it.

People were discussing this before, but no agreement was reached. Is it
possible to implement it in the kernel? I would like to discuss it more
if there is any good idea.

> Or is it that only the page table levels themselves are GFNs and the
> actual DMA's are IOVA? The unclear mixing of GFN as IOVA in the code
> makes it inscrutible.
>

No. Even though the HW is capable of controlling the level of translation,
it's not used like that in the existing driver. It's definitely an open
architectural question.
 
> Jason
> 


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 21:08                     ` Wang, Zhi A
  (?)
@ 2022-04-13 23:12                       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-13 23:12 UTC (permalink / raw)
  To: Wang, Zhi A
  Cc: kvm, linux-doc, David Airlie, Joonas Lahtinen, Tian, Kevin,
	dri-devel, Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Jason Herne, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Jani Nikula, Alex Williamson, Harald Freudenberger, Zhenyu Wang,
	Vivi, Rodrigo, intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin,
	Cornelia Huck, Peter Oberparleiter, Sven Schnelle, Daniel Vetter

On Wed, Apr 13, 2022 at 09:08:40PM +0000, Wang, Zhi A wrote:
> On 4/13/22 8:04 PM, Jason Gunthorpe wrote:
> > On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
> >> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> >>> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> >>>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> >>>>> Yeah, I was thinking about that too, but on the other hand I think it
> >>>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
> >>>>> supposed to be tightly linked to KVM - the only exception possibly
> >>>>> being s390..
> >>>>
> >>>> So i915/gvt uses it for:
> >>>>
> >>>>  - poking into the KVM GFN translations
> >>>>  - using the KVM page track notifier
> >>>>
> >>>> No idea how these could be solved in a more generic way.
> >>>
> >>> TBH I'm not sure how any of this works fully correctly..
> >>>
> >>> I see this code getting something it calls a GFN and then passing
> >>> them to vfio - which makes no sense. Either a value is a GFN - the
> >>> physical memory address of the VM, or it is an IOVA. VFIO only takes
> >>> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> >>>
> >> Can you let me know the place? So that I can take a look.
> > 
> > Well, for instance:
> > 
> > static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> > 		unsigned long size, struct page **page)
> > 
> > There is no way that is a GFN, it is an IOVA.
> > 
> I see. The name is vague. There is an promised 1:1 mapping between guest GFN
> and host IOVA when a PCI device is passed to a VM, I guess mdev is just
> leveraging it as they are sharing the same code path in QEMU.

That has never been true. It happens to be the case in some common scenarios.

> > So if the page table in the guest has IOVA addreses then why can you
> > use them as GFNs?
> 
> That's another problem. We don't support a guess enabling the guest IOMMU
> (aka virtual IOMMU). The guest/virtual IOMMU is implemented in QEMU, so
> does the translation between guest IOVA and GFN. For a mdev model
> implemented in the kernel, there isn't any mechanism so far to reach there.

And this is the uncommon scenario: there is no way for the mdev driver
to know if the viommu is turned on, and AFAIK no way to block it from VFIO.

> People were discussing it before. But none agreement was achieved. Is it
> possible to implement it in the kernel? Would like to discuss more about it
> if there is any good idea.

I don't know of anything. VFIO and kvm are not intended to be tightly
linked like this; they don't have the same view of the world.

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 23:12                       ` Jason Gunthorpe
  (?)
@ 2022-04-14  2:04                         ` Tian, Kevin
  -1 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:04 UTC (permalink / raw)
  To: Jason Gunthorpe, Wang, Zhi A
  Cc: kvm, linux-doc, David Airlie, Joonas Lahtinen, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Jason Herne, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Jani Nikula, Alex Williamson, Harald Freudenberger, Zhenyu Wang,
	Vivi, Rodrigo, intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin,
	Cornelia Huck, Peter Oberparleiter, Sven Schnelle, Daniel Vetter

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, April 14, 2022 7:12 AM
> 
> On Wed, Apr 13, 2022 at 09:08:40PM +0000, Wang, Zhi A wrote:
> > On 4/13/22 8:04 PM, Jason Gunthorpe wrote:
> > > On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
> > >> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> > >>> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> > >>>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> > >>>>> Yeah, I was thinking about that too, but on the other hand I think it
> > >>>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
> > >>>>> supposed to be tightly linked to KVM - the only exception possibly
> > >>>>> being s390..
> > >>>>
> > >>>> So i915/gvt uses it for:
> > >>>>
> > >>>>  - poking into the KVM GFN translations

The only user of this is is_2MB_gtt_possible(), which I suppose should
go through vfio instead of kvm, as it actually means IOVA here.

> > >>>>  - using the KVM page track notifier

This is the real reason for the mess, as write-protecting CPU access
to certain guest memory has to go through KVM.

> > >>>>
> > >>>> No idea how these could be solved in a more generic way.
> > >>>
> > >>> TBH I'm not sure how any of this works fully correctly..
> > >>>
> > >>> I see this code getting something it calls a GFN and then passing
> > >>> them to vfio - which makes no sense. Either a value is a GFN - the
> > >>> physical memory address of the VM, or it is an IOVA. VFIO only takes
> > >>> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> > >>>
> > >> Can you let me know the place? So that I can take a look.
> > >
> > > Well, for instance:
> > >
> > > static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> > > 		unsigned long size, struct page **page)
> > >
> > > There is no way that is a GFN, it is an IOVA.
> > >
> > I see. The name is vague. There is an promised 1:1 mapping between guest
> GFN
> > and host IOVA when a PCI device is passed to a VM, I guess mdev is just
> > leveraging it as they are sharing the same code path in QEMU.
> 
> That has never been true. It happens to be the case in some common
> scenarios.
> 
> > > So if the page table in the guest has IOVA addreses then why can you
> > > use them as GFNs?
> >
> > That's another problem. We don't support a guess enabling the guest
> IOMMU
> > (aka virtual IOMMU). The guest/virtual IOMMU is implemented in QEMU,
> so
> > does the translation between guest IOVA and GFN. For a mdev model
> > implemented in the kernel, there isn't any mechanism so far to reach there.
> 
> And this is the uncommon scenario, there is no way for the mdev driver
> to know if viommu is turned on, and AFAIK, no way to block it from VFIO.
> 
> > People were discussing it before. But none agreement was achieved. Is it
> > possible to implement it in the kernel? Would like to discuss more about it
> > if there is any good idea.
> 
> I don't know of anything, VFIO and kvm are not intended to be tightly
> linked like this, they don't have the same view of the world.
> 

Yes, this is the main problem. VFIO only cares about IOVA and KVM
only cares about GPA. GVT as an mdev driver should follow VFIO
in concept, but due to the requirement of GPU page table shadowing
it needs to call into KVM for write-protecting CPU access to GPA.

What about extending the KVM page tracking interface to accept HVA?
HVA is probably the only common denominator between VFIO and
KVM that would allow dissolving this conceptual disconnect...
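
Purely as a strawman, an HVA-based variant might look something like the
below; nothing of this sort exists in the kernel today, and the names and
signatures are invented only to make the proposal concrete:

struct kvm_hva_track_notifier_node {
	/* called when any vCPU writes a tracked HVA range */
	void (*track_write)(struct kvm *kvm, unsigned long hva,
			    const u8 *new, int bytes,
			    struct kvm_hva_track_notifier_node *node);
};

int kvm_hva_track_add_range(struct kvm *kvm, unsigned long hva,
			    unsigned long size,
			    struct kvm_hva_track_notifier_node *node);
int kvm_hva_track_remove_range(struct kvm *kvm, unsigned long hva,
			       unsigned long size,
			       struct kvm_hva_track_notifier_node *node);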

Thanks
Kevin


^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14  2:04                         ` Tian, Kevin
  0 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:04 UTC (permalink / raw)
  To: Jason Gunthorpe, Wang, Zhi A
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Tony Krowiak,
	Eric Farman, Vasily Gorbik, intel-gfx, Alex Williamson,
	Harald Freudenberger, Vivi,  Rodrigo, intel-gvt-dev, Jason Herne,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, April 14, 2022 7:12 AM
> 
> On Wed, Apr 13, 2022 at 09:08:40PM +0000, Wang, Zhi A wrote:
> > On 4/13/22 8:04 PM, Jason Gunthorpe wrote:
> > > On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
> > >> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> > >>> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> > >>>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> > >>>>> Yeah, I was thinking about that too, but on the other hand I think it
> > >>>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
> > >>>>> supposed to be tightly linked to KVM - the only exception possibly
> > >>>>> being s390..
> > >>>>
> > >>>> So i915/gvt uses it for:
> > >>>>
> > >>>>  - poking into the KVM GFN translations

The only user of this is is_2MB_gtt_possible() which I suppose should
go through vfio instead of kvm as it actually means IOVA here.

> > >>>>  - using the KVM page track notifier

This is the real reason which causes the mess as write-protecting
CPU access to certain guest memory has to go through KVM.

> > >>>>
> > >>>> No idea how these could be solved in a more generic way.
> > >>>
> > >>> TBH I'm not sure how any of this works fully correctly..
> > >>>
> > >>> I see this code getting something it calls a GFN and then passing
> > >>> them to vfio - which makes no sense. Either a value is a GFN - the
> > >>> physical memory address of the VM, or it is an IOVA. VFIO only takes
> > >>> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> > >>>
> > >> Can you let me know the place? So that I can take a look.
> > >
> > > Well, for instance:
> > >
> > > static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> > > 		unsigned long size, struct page **page)
> > >
> > > There is no way that is a GFN, it is an IOVA.
> > >
> > I see. The name is vague. There is an promised 1:1 mapping between guest
> GFN
> > and host IOVA when a PCI device is passed to a VM, I guess mdev is just
> > leveraging it as they are sharing the same code path in QEMU.
> 
> That has never been true. It happens to be the case in some common
> scenarios.
> 
> > > So if the page table in the guest has IOVA addreses then why can you
> > > use them as GFNs?
> >
> > That's another problem. We don't support a guess enabling the guest
> IOMMU
> > (aka virtual IOMMU). The guest/virtual IOMMU is implemented in QEMU,
> so
> > does the translation between guest IOVA and GFN. For a mdev model
> > implemented in the kernel, there isn't any mechanism so far to reach there.
> 
> And this is the uncommon scenario, there is no way for the mdev driver
> to know if viommu is turned on, and AFAIK, no way to block it from VFIO.
> 
> > People were discussing it before. But none agreement was achieved. Is it
> > possible to implement it in the kernel? Would like to discuss more about it
> > if there is any good idea.
> 
> I don't know of anything, VFIO and kvm are not intended to be tightly
> linked like this, they don't have the same view of the world.
> 

Yes this is the main problem. VFIO only cares about IOVA and KVM
only cares about GPA. GVT as a mdev driver should follow VFIO
in concept but due to the requirement of gpu page table shadowing
it needs call into KVM for write-protecting CPU access to GPA.

What about extending KVM page tracking interface to accept HVA?
This is probably the only common denominator between VFIO and
KVM to allow dissolve this conceptual disconnection...

Thanks
Kevin


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14  2:04                         ` Tian, Kevin
  0 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:04 UTC (permalink / raw)
  To: Jason Gunthorpe, Wang, Zhi A
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Tony Krowiak,
	Eric Farman, Vasily Gorbik, intel-gfx, Harald Freudenberger,
	Vivi,  Rodrigo, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, April 14, 2022 7:12 AM
> 
> On Wed, Apr 13, 2022 at 09:08:40PM +0000, Wang, Zhi A wrote:
> > On 4/13/22 8:04 PM, Jason Gunthorpe wrote:
> > > On Wed, Apr 13, 2022 at 07:17:52PM +0000, Wang, Zhi A wrote:
> > >> On 4/13/22 5:37 PM, Jason Gunthorpe wrote:
> > >>> On Wed, Apr 13, 2022 at 06:29:46PM +0200, Christoph Hellwig wrote:
> > >>>> On Wed, Apr 13, 2022 at 01:18:14PM -0300, Jason Gunthorpe wrote:
> > >>>>> Yeah, I was thinking about that too, but on the other hand I think it
> > >>>>> is completely wrong that gvt requires kvm at all. A vfio_device is not
> > >>>>> supposed to be tightly linked to KVM - the only exception possibly
> > >>>>> being s390..
> > >>>>
> > >>>> So i915/gvt uses it for:
> > >>>>
> > >>>>  - poking into the KVM GFN translations

The only user of this is is_2MB_gtt_possible() which I suppose should
go through vfio instead of kvm as it actually means IOVA here.

> > >>>>  - using the KVM page track notifier

This is the real reason which causes the mess as write-protecting
CPU access to certain guest memory has to go through KVM.

> > >>>>
> > >>>> No idea how these could be solved in a more generic way.
> > >>>
> > >>> TBH I'm not sure how any of this works fully correctly..
> > >>>
> > >>> I see this code getting something it calls a GFN and then passing
> > >>> them to vfio - which makes no sense. Either a value is a GFN - the
> > >>> physical memory address of the VM, or it is an IOVA. VFIO only takes
> > >>> in IOVA and kvm only takes in GFN. So these are probably IOVAs really..
> > >>>
> > >> Can you let me know the place? So that I can take a look.
> > >
> > > Well, for instance:
> > >
> > > static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> > > 		unsigned long size, struct page **page)
> > >
> > > There is no way that is a GFN, it is an IOVA.
> > >
> > I see. The name is vague. There is an promised 1:1 mapping between guest
> GFN
> > and host IOVA when a PCI device is passed to a VM, I guess mdev is just
> > leveraging it as they are sharing the same code path in QEMU.
> 
> That has never been true. It happens to be the case in some common
> scenarios.
> 
> > > So if the page table in the guest has IOVA addresses then why can you
> > > use them as GFNs?
> >
> > That's another problem. We don't support a guest enabling the guest
> > IOMMU (aka virtual IOMMU). The guest/virtual IOMMU is implemented in
> > QEMU, as is the translation between guest IOVA and GFN. For an mdev
> > model implemented in the kernel, there isn't any mechanism so far to
> > reach there.
> 
> And this is the uncommon scenario; there is no way for the mdev driver
> to know if the viommu is turned on, and AFAIK no way to block it from VFIO.
> 
> > People were discussing it before, but no agreement was reached. Is it
> > possible to implement it in the kernel? I would like to discuss it more
> > if there are any good ideas.
> 
> I don't know of anything, VFIO and kvm are not intended to be tightly
> linked like this, they don't have the same view of the world.
> 

Yes, this is the main problem. VFIO only cares about IOVA and KVM
only cares about GPA. GVT, as an mdev driver, should follow VFIO
in concept, but because of the requirement of GPU page table shadowing
it needs to call into KVM to write-protect CPU access to GPA.

What about extending the KVM page tracking interface to accept HVAs?
That is probably the only common denominator between VFIO and
KVM that would allow dissolving this conceptual disconnect...
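
Something along these lines, purely as an illustration: a driver-facing
kvm_page_track_add_hva() does not exist today and the helper name below
is made up; only the memslot fields used for the HVA-to-GFN arithmetic
and kvm_slot_page_track_add_page() are real:

/*
 * Hypothetical sketch only.  KVM records the HVA range of every
 * memslot, so once the memslot covering the HVA is found (lookup
 * elided here), the rest is arithmetic plus the existing
 * write-tracking call.
 */
static void track_hva_in_slot(struct kvm *kvm,
			      struct kvm_memory_slot *slot,
			      unsigned long hva)
{
	gfn_t gfn = slot->base_gfn +
		    ((hva - slot->userspace_addr) >> PAGE_SHIFT);

	write_lock(&kvm->mmu_lock);
	kvm_slot_page_track_add_page(kvm, slot, gfn,
				     KVM_PAGE_TRACK_WRITE);
	write_unlock(&kvm->mmu_lock);
}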

Thanks
Kevin


^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-13 21:08                     ` Wang, Zhi A
  (?)
@ 2022-04-14  2:15                       ` Tian, Kevin
  -1 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:15 UTC (permalink / raw)
  To: Wang, Zhi A, Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, Joonas Lahtinen, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Jason Herne, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Jani Nikula, Alex Williamson, Harald Freudenberger, Zhenyu Wang,
	Vivi, Rodrigo, intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin,
	Cornelia Huck, Peter Oberparleiter, Sven Schnelle, Daniel Vetter

> From: Wang, Zhi A <zhi.a.wang@intel.com>
> Sent: Thursday, April 14, 2022 5:09 AM
> 
> > Or is it that only the page table levels themselves are GFNs and the
> > actual DMAs are IOVA? The unclear mixing of GFN as IOVA in the code
> > makes it inscrutable.
> >
> 
> No. Even though the HW is capable of controlling the level of translation,
> it's not used like this in the existing driver. It's definitely an
> open architecture question.
> 

There is no open question here. Any guest memory that the vGPU accesses
must be addressed by IOVA, including the page table levels themselves.
There is only one address space per vRID.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14  2:15                       ` Tian, Kevin
  0 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:15 UTC (permalink / raw)
  To: Wang, Zhi A, Jason Gunthorpe
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Tony Krowiak,
	Eric Farman, Vasily Gorbik, intel-gfx, Alex Williamson,
	Harald Freudenberger, Vivi,  Rodrigo, intel-gvt-dev, Jason Herne,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

> From: Wang, Zhi A <zhi.a.wang@intel.com>
> Sent: Thursday, April 14, 2022 5:09 AM
> 
> > Or is it that only the page table levels themselves are GFNs and the
> > actual DMAs are IOVA? The unclear mixing of GFN as IOVA in the code
> > makes it inscrutable.
> >
> 
> No. Even though the HW is capable of controlling the level of translation,
> it's not used like this in the existing driver. It's definitely an
> open architecture question.
> 

There is no open question here. Any guest memory that the vGPU accesses
must be addressed by IOVA, including the page table levels themselves.
There is only one address space per vRID.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14  2:15                       ` Tian, Kevin
  0 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-14  2:15 UTC (permalink / raw)
  To: Wang, Zhi A, Jason Gunthorpe
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Tony Krowiak,
	Eric Farman, Vasily Gorbik, intel-gfx, Harald Freudenberger,
	Vivi,  Rodrigo, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

> From: Wang, Zhi A <zhi.a.wang@intel.com>
> Sent: Thursday, April 14, 2022 5:09 AM
> 
> > Or is it that only the page table levels themselves are GFNs and the
> > actual DMAs are IOVA? The unclear mixing of GFN as IOVA in the code
> > makes it inscrutable.
> >
> 
> No. Even though the HW is capable of controlling the level of translation,
> it's not used like this in the existing driver. It's definitely an
> open architecture question.
> 

There is no open question here. Any guest memory that the vGPU accesses
must be addressed by IOVA, including the page table levels themselves.
There is only one address space per vRID.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-12 15:53   ` Jason Gunthorpe
  (?)
@ 2022-04-14 13:51     ` Matthew Rosato
  -1 siblings, 0 replies; 141+ messages in thread
From: Matthew Rosato @ 2022-04-14 13:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between
> open_device()/close_device():
> 
>    vfio_pin_pages()
>    vfio_unpin_pages()
>    vfio_dma_rw()
>    vfio_register_notifier()
>    vfio_unregister_notifier()
> 
> So eliminate the calls to vfio_group_add_container_user() and add a simple
> WARN_ON to detect mis-use by drivers.
> 

vfio_device_fops_release decrements dev->open_count immediately before
calling dev->ops->close_device, which means we could enter close_device
with an open_count of 0.

Maybe vfio_device_fops_release should handle this the same way as
vfio_group_get_device_fd does?

	if (device->open_count == 1 && device->ops->close_device)
		device->ops->close_device(device);
	device->open_count--;
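
For reference, the "simple WARN_ON" from the commit message boils down to
a check along these lines, which is exactly why close_device must still
see a non-zero open_count (a sketch, not the verbatim patch):

static bool vfio_assert_device_open(struct vfio_device *device)
{
	/* drivers must only call the device APIs while open_count > 0 */
	return !WARN_ON(!READ_ONCE(device->open_count));
}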


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-14 13:51     ` Matthew Rosato
  0 siblings, 0 replies; 141+ messages in thread
From: Matthew Rosato @ 2022-04-14 13:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Tian, Kevin, Liu, Yi L, Christoph Hellwig

On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between
> open_device()/close_device():
> 
>    vfio_pin_pages()
>    vfio_unpin_pages()
>    vfio_dma_rw()
>    vfio_register_notifier()
>    vfio_unregister_notifier()
> 
> So eliminate the calls to vfio_group_add_container_user() and add a simple
> WARN_ON to detect mis-use by drivers.
> 

vfio_device_fops_release decrements dev->open_count immediately before
calling dev->ops->close_device, which means we could enter close_device
with an open_count of 0.

Maybe vfio_device_fops_release should handle this the same way as
vfio_group_get_device_fd does?

	if (device->open_count == 1 && device->ops->close_device)
		device->ops->close_device(device);
	device->open_count--;


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-14 13:51     ` Matthew Rosato
  0 siblings, 0 replies; 141+ messages in thread
From: Matthew Rosato @ 2022-04-14 13:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Liu, Yi L, Christoph Hellwig

On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between
> open_device()/close_device():
> 
>    vfio_pin_pages()
>    vfio_unpin_pages()
>    vfio_dma_rw()
>    vfio_register_notifier()
>    vfio_unregister_notifier()
> 
> So eliminate the calls to vfio_group_add_container_user() and add a simple
> WARN_ON to detect mis-use by drivers.
> 

vfio_device_fops_release decrements dev->open_count immediately before
calling dev->ops->close_device, which means we could enter close_device
with an open_count of 0.

Maybe vfio_device_fops_release should handle this the same way as
vfio_group_get_device_fd does?

	if (device->open_count == 1 && device->ops->close_device)
		device->ops->close_device(device);
	device->open_count--;


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-14 13:51     ` Matthew Rosato
  (?)
@ 2022-04-14 14:22       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-14 14:22 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Zhi Wang,
	Jason Herne, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Thu, Apr 14, 2022 at 09:51:49AM -0400, Matthew Rosato wrote:
> On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> > When the open_device() op is called the container_users is incremented and
> > held incremented until close_device(). Thus, so long as drivers call
> > functions within their open_device()/close_device() region they do not
> > need to worry about the container_users.
> > 
> > These functions can all only be called between
> > open_device()/close_device():
> > 
> >    vfio_pin_pages()
> >    vfio_unpin_pages()
> >    vfio_dma_rw()
> >    vfio_register_notifier()
> >    vfio_unregister_notifier()
> > 
> > So eliminate the calls to vfio_group_add_container_user() and add a simple
> > WARN_ON to detect mis-use by drivers.
> > 
> 
> vfio_device_fops_release decrements dev->open_count immediately before
> calling dev->ops->close_device, which means we could enter close_device with
> a dev_count of 0.
> 
> Maybe vfio_device_fops_release should handle the same way as
> vfio_group_get_device_fd?
> 
> 	if (device->open_count == 1 && device->ops->close_device)
> 		device->ops->close_device(device);
> 	device->open_count--;

Yes, thanks a lot! I have nothing to test these flows on...

It matches the ordering in the only other place that calls close_device.

I folded this into the patch:

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 0f735f9f206002..29761f0cf0a227 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1551,8 +1551,9 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 
 	mutex_lock(&device->dev_set->lock);
 	vfio_assert_device_open(device);
-	if (!--device->open_count && device->ops->close_device)
+	if (device->open_count == 1 && device->ops->close_device)
 		device->ops->close_device(device);
+	device->open_count--;
 	mutex_unlock(&device->dev_set->lock);
 
 	module_put(device->dev->driver->owner);

Jason

^ permalink raw reply related	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-14 14:22       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-14 14:22 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Jason Herne, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Thu, Apr 14, 2022 at 09:51:49AM -0400, Matthew Rosato wrote:
> On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> > When the open_device() op is called the container_users is incremented and
> > held incremented until close_device(). Thus, so long as drivers call
> > functions within their open_device()/close_device() region they do not
> > need to worry about the container_users.
> > 
> > These functions can all only be called between
> > open_device()/close_device():
> > 
> >    vfio_pin_pages()
> >    vfio_unpin_pages()
> >    vfio_dma_rw()
> >    vfio_register_notifier()
> >    vfio_unregister_notifier()
> > 
> > So eliminate the calls to vfio_group_add_container_user() and add a simple
> > WARN_ON to detect mis-use by drivers.
> > 
> 
> vfio_device_fops_release decrements dev->open_count immediately before
> calling dev->ops->close_device, which means we could enter close_device with
> a dev_count of 0.
> 
> Maybe vfio_device_fops_release should handle the same way as
> vfio_group_get_device_fd?
> 
> 	if (device->open_count == 1 && device->ops->close_device)
> 		device->ops->close_device(device);
> 	device->open_count--;

Yes, thanks a lot! I have nothing to test these flows on...

It matches the ordering in the only other place that calls close_device.

I folded this into the patch:

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 0f735f9f206002..29761f0cf0a227 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1551,8 +1551,9 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 
 	mutex_lock(&device->dev_set->lock);
 	vfio_assert_device_open(device);
-	if (!--device->open_count && device->ops->close_device)
+	if (device->open_count == 1 && device->ops->close_device)
 		device->ops->close_device(device);
+	device->open_count--;
 	mutex_unlock(&device->dev_set->lock);
 
 	module_put(device->dev->driver->owner);

Jason

^ permalink raw reply related	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-14 14:22       ` Jason Gunthorpe
  0 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-14 14:22 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian, Kevin, Liu, Yi L

On Thu, Apr 14, 2022 at 09:51:49AM -0400, Matthew Rosato wrote:
> On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> > When the open_device() op is called the container_users is incremented and
> > held incremented until close_device(). Thus, so long as drivers call
> > functions within their open_device()/close_device() region they do not
> > need to worry about the container_users.
> > 
> > These functions can all only be called between
> > open_device()/close_device():
> > 
> >    vfio_pin_pages()
> >    vfio_unpin_pages()
> >    vfio_dma_rw()
> >    vfio_register_notifier()
> >    vfio_unregister_notifier()
> > 
> > So eliminate the calls to vfio_group_add_container_user() and add a simple
> > WARN_ON to detect mis-use by drivers.
> > 
> 
> vfio_device_fops_release decrements dev->open_count immediately before
> calling dev->ops->close_device, which means we could enter close_device with
> a dev_count of 0.
> 
> Maybe vfio_device_fops_release should handle the same way as
> vfio_group_get_device_fd?
> 
> 	if (device->open_count == 1 && device->ops->close_device)
> 		device->ops->close_device(device);
> 	device->open_count--;

Yes, thanks a lot! I have nothing to test these flows on...

It matches the ordering in the only other place that calls close_device.

I folded this into the patch:

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 0f735f9f206002..29761f0cf0a227 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1551,8 +1551,9 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 
 	mutex_lock(&device->dev_set->lock);
 	vfio_assert_device_open(device);
-	if (!--device->open_count && device->ops->close_device)
+	if (device->open_count == 1 && device->ops->close_device)
 		device->ops->close_device(device);
+	device->open_count--;
 	mutex_unlock(&device->dev_set->lock);
 
 	module_put(device->dev->driver->owner);

Jason

^ permalink raw reply related	[flat|nested] 141+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BUILD: failure for Make the rest of the VFIO driver interface use vfio_device (rev2)
  2022-04-12 15:53 ` Jason Gunthorpe
                   ` (15 preceding siblings ...)
  (?)
@ 2022-04-14 15:21 ` Patchwork
  -1 siblings, 0 replies; 141+ messages in thread
From: Patchwork @ 2022-04-14 15:21 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device (rev2)
URL   : https://patchwork.freedesktop.org/series/102606/
State : failure

== Summary ==

Error: patch https://patchwork.freedesktop.org/api/1.0/series/102606/revisions/2/mbox/ not applied
Applying: vfio: Make vfio_(un)register_notifier accept a vfio_device
Applying: vfio/ccw: Remove mdev from struct channel_program
Applying: vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
Applying: drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
Applying: vfio: Pass in a struct vfio_device * to vfio_dma_rw()
Applying: drm/i915/gvt: Add missing module_put() in error unwind
Applying: drm/i915/gvt: Delete kvmgt_vdev::vfio_group
Applying: vfio: Remove dead code
Applying: vfio: Remove calls to vfio_group_add_container_user()
error: sha1 information is lacking or useless (drivers/vfio/vfio.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0009 vfio: Remove calls to vfio_group_add_container_user()
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-12 15:53   ` Jason Gunthorpe
  (?)
@ 2022-04-14 19:25     ` Eric Farman
  -1 siblings, 0 replies; 141+ messages in thread
From: Eric Farman @ 2022-04-14 19:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On Tue, 2022-04-12 at 12:53 -0300, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device
> pointer
> vfio_mdev.c puts in the drv_data.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>  drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>  drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>  drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>  drivers/vfio/vfio.c               | 25 +++++++------------------
>  include/linux/mdev.h              |  1 +
>  include/linux/vfio.h              |  4 ++--
>  7 files changed, 41 insertions(+), 37 deletions(-)

For the -ccw bits:

Acked-by: Eric Farman <farman@linux.ibm.com>

> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct
> notifier_block *nb,
>  
>  static int intel_vgpu_open_device(struct mdev_device *mdev)
>  {
> +	struct vfio_device *vfio_dev =
> mdev_legacy_get_vfio_device(mdev);
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>  
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
> &events,
>  				&vdev->iommu_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for iommu failed:
> %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	}
>  
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
> &events,
>  				&vdev->group_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for group failed:
> %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->vfio_group = NULL;
>  
>  undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  
>  undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  out:
>  	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>  	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>  	int ret;
>  
>  	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  
>  	intel_gvt_ops->vgpu_release(vgpu);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for iommu failed: %d\n",
> ret);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for group failed: %d\n",
> ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c
> b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				     &events, &private->nb);
>  	if (ret)
>  		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  out_unregister:
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  	return ret;
>  }
>  
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct
> vfio_device *vdev)
>  
>  	cp_free(&private->cp);
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  }
>  
>  static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private
> *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct
> vfio_device *vdev)
>  	matrix_mdev->group_notifier.notifier_call =
> vfio_ap_mdev_group_notifier;
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev-
> >group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>  	if (ret)
>  		return ret;
>  
>  	matrix_mdev->iommu_notifier.notifier_call =
> vfio_ap_mdev_iommu_notifier;
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev-
> >iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>  	if (ret)
>  		goto out_unregister_group;
>  	return 0;
>  
>  out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	return ret;
>  }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct
> vfio_device *vdev)
>  	struct ap_matrix_mdev *matrix_mdev =
>  		container_of(vdev, struct ap_matrix_mdev, vdev);
>  
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	vfio_ap_mdev_unset_kvm(matrix_mdev);
>  }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c
> b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>  
>  #include "mdev_private.h"
>  
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be
> added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>  static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>  {
>  	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int
> vfio_unregister_group_notifier(struct vfio_group *group,
>  	return ret;
>  }
>  
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type
> type,
> +int vfio_register_notifier(struct vfio_device *dev, enum
> vfio_notify_type type,
>  			   unsigned long *events, struct notifier_block
> *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_register_notifier);
>  
> -int vfio_unregister_notifier(struct device *dev, enum
> vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>  			     struct notifier_block *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb)
> +	if (!nb)
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device
> *to_mdev_device(struct device *dev)
>  unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>  unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>  struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev);
>  
>  /**
>   * struct mdev_parent_ops - Structure to be registered for each
> parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>  /* events for VFIO_GROUP_NOTIFY */
>  #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>  
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>  				  enum vfio_notify_type type,
>  				  unsigned long *required_events,
>  				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>  				    enum vfio_notify_type type,
>  				    struct notifier_block *nb);
>  


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14 19:25     ` Eric Farman
  0 siblings, 0 replies; 141+ messages in thread
From: Eric Farman @ 2022-04-14 19:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Tian, Kevin, Liu, Yi L, Christoph Hellwig

On Tue, 2022-04-12 at 12:53 -0300, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device
> pointer
> vfio_mdev.c puts in the drv_data.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>  drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>  drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>  drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>  drivers/vfio/vfio.c               | 25 +++++++------------------
>  include/linux/mdev.h              |  1 +
>  include/linux/vfio.h              |  4 ++--
>  7 files changed, 41 insertions(+), 37 deletions(-)

For the -ccw bits:

Acked-by: Eric Farman <farman@linux.ibm.com>

> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct
> notifier_block *nb,
>  
>  static int intel_vgpu_open_device(struct mdev_device *mdev)
>  {
> +	struct vfio_device *vfio_dev =
> mdev_legacy_get_vfio_device(mdev);
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>  
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
> &events,
>  				&vdev->iommu_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for iommu failed:
> %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	}
>  
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
> &events,
>  				&vdev->group_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for group failed:
> %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->vfio_group = NULL;
>  
>  undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  
>  undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  out:
>  	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>  	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>  	int ret;
>  
>  	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  
>  	intel_gvt_ops->vgpu_release(vgpu);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for iommu failed: %d\n",
> ret);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for group failed: %d\n",
> ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c
> b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				     &events, &private->nb);
>  	if (ret)
>  		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  out_unregister:
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  	return ret;
>  }
>  
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct
> vfio_device *vdev)
>  
>  	cp_free(&private->cp);
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  }
>  
>  static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private
> *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct
> vfio_device *vdev)
>  	matrix_mdev->group_notifier.notifier_call =
> vfio_ap_mdev_group_notifier;
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev-
> >group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>  	if (ret)
>  		return ret;
>  
>  	matrix_mdev->iommu_notifier.notifier_call =
> vfio_ap_mdev_iommu_notifier;
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev-
> >iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>  	if (ret)
>  		goto out_unregister_group;
>  	return 0;
>  
>  out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	return ret;
>  }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct
> vfio_device *vdev)
>  	struct ap_matrix_mdev *matrix_mdev =
>  		container_of(vdev, struct ap_matrix_mdev, vdev);
>  
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	vfio_ap_mdev_unset_kvm(matrix_mdev);
>  }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c
> b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>  
>  #include "mdev_private.h"
>  
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be
> added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>  static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>  {
>  	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int
> vfio_unregister_group_notifier(struct vfio_group *group,
>  	return ret;
>  }
>  
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type
> type,
> +int vfio_register_notifier(struct vfio_device *dev, enum
> vfio_notify_type type,
>  			   unsigned long *events, struct notifier_block
> *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_register_notifier);
>  
> -int vfio_unregister_notifier(struct device *dev, enum
> vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>  			     struct notifier_block *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb)
> +	if (!nb)
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device
> *to_mdev_device(struct device *dev)
>  unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>  unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>  struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev);
>  
>  /**
>   * struct mdev_parent_ops - Structure to be registered for each
> parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>  /* events for VFIO_GROUP_NOTIFY */
>  #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>  
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>  				  enum vfio_notify_type type,
>  				  unsigned long *required_events,
>  				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>  				    enum vfio_notify_type type,
>  				    struct notifier_block *nb);
>  


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-14 19:25     ` Eric Farman
  0 siblings, 0 replies; 141+ messages in thread
From: Eric Farman @ 2022-04-14 19:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Liu, Yi L, Christoph Hellwig

On Tue, 2022-04-12 at 12:53 -0300, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device
> pointer
> vfio_mdev.c puts in the drv_data.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>  drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>  drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>  drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>  drivers/vfio/vfio.c               | 25 +++++++------------------
>  include/linux/mdev.h              |  1 +
>  include/linux/vfio.h              |  4 ++--
>  7 files changed, 41 insertions(+), 37 deletions(-)

For the -ccw bits:

Acked-by: Eric Farman <farman@linux.ibm.com>

> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct
> notifier_block *nb,
>  
>  static int intel_vgpu_open_device(struct mdev_device *mdev)
>  {
> +	struct vfio_device *vfio_dev =
> mdev_legacy_get_vfio_device(mdev);
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>  
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
> &events,
>  				&vdev->iommu_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for iommu failed:
> %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	}
>  
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
> &events,
>  				&vdev->group_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for group failed:
> %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct
> mdev_device *mdev)
>  	vdev->vfio_group = NULL;
>  
>  undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  
>  undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  out:
>  	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>  	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>  	int ret;
>  
>  	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct
> intel_vgpu *vgpu)
>  
>  	intel_gvt_ops->vgpu_release(vgpu);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>  					&vdev->iommu_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for iommu failed: %d\n",
> ret);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev),
> VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  					&vdev->group_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for group failed: %d\n",
> ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c
> b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				     &events, &private->nb);
>  	if (ret)
>  		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
>  
>  out_unregister:
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  	return ret;
>  }
>  
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct
> vfio_device *vdev)
>  
>  	cp_free(&private->cp);
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
>  }
>  
>  static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private
> *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct
> vfio_device *vdev)
>  	matrix_mdev->group_notifier.notifier_call =
> vfio_ap_mdev_group_notifier;
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
>  
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev-
> >group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>  	if (ret)
>  		return ret;
>  
>  	matrix_mdev->iommu_notifier.notifier_call =
> vfio_ap_mdev_iommu_notifier;
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev-
> >iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>  	if (ret)
>  		goto out_unregister_group;
>  	return 0;
>  
>  out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	return ret;
>  }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct
> vfio_device *vdev)
>  	struct ap_matrix_mdev *matrix_mdev =
>  		container_of(vdev, struct ap_matrix_mdev, vdev);
>  
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	vfio_ap_mdev_unset_kvm(matrix_mdev);
>  }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c
> b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>  
>  #include "mdev_private.h"
>  
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be
> added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>  static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>  {
>  	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int
> vfio_unregister_group_notifier(struct vfio_group *group,
>  	return ret;
>  }
>  
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type
> type,
> +int vfio_register_notifier(struct vfio_device *dev, enum
> vfio_notify_type type,
>  			   unsigned long *events, struct notifier_block
> *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_register_notifier);
>  
> -int vfio_unregister_notifier(struct device *dev, enum
> vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>  			     struct notifier_block *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
>  
> -	if (!dev || !nb)
> +	if (!nb)
>  		return -EINVAL;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device
> *dev, enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device
> *to_mdev_device(struct device *dev)
>  unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>  unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>  struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device
> *mdev);
>  
>  /**
>   * struct mdev_parent_ops - Structure to be registered for each
> parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>  /* events for VFIO_GROUP_NOTIFY */
>  #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>  
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>  				  enum vfio_notify_type type,
>  				  unsigned long *required_events,
>  				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>  				    enum vfio_notify_type type,
>  				    struct notifier_block *nb);
>  


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 2/9] vfio/ccw: Remove mdev from struct channel_program
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-14 19:25     ` Eric Farman
  -1 siblings, 0 replies; 141+ messages in thread
From: Eric Farman @ 2022-04-14 19:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On Tue, 2022-04-12 at 12:53 -0300, Jason Gunthorpe wrote:
> The next patch wants the vfio_device instead. There is no reason to
> store
> a pointer here since we can container_of back to the vfio_device.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  drivers/s390/cio/vfio_ccw_cp.c  | 44 +++++++++++++++++++----------
> ----
>  drivers/s390/cio/vfio_ccw_cp.h  |  4 +--
>  drivers/s390/cio/vfio_ccw_fsm.c |  3 +--
>  3 files changed, 28 insertions(+), 23 deletions(-)

There's opportunity for simplification here, but I'll handle that when
I get to some other work in this space. For this series, this is fine.

Reviewed-by: Eric Farman <farman@linux.ibm.com>
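
For readers skimming the thread, the container_of pattern the commit message
refers to works out to roughly this (a sketch assuming the vfio_ccw_private
layout used in this series, where the channel_program is embedded as the
'cp' member and the vfio_device as 'vdev'; the cp_to_vdev() helper is
hypothetical, not something the patch adds):

	/* Recover the owning vfio_device from an embedded channel_program. */
	static struct vfio_device *cp_to_vdev(struct channel_program *cp)
	{
		struct vfio_ccw_private *private =
			container_of(cp, struct vfio_ccw_private, cp);

		return &private->vdev;
	}

The patch open-codes this expression at each call site rather than adding a
helper, which is why the container_of() line repeats in the diff below.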

> 
> diff --git a/drivers/s390/cio/vfio_ccw_cp.c
> b/drivers/s390/cio/vfio_ccw_cp.c
> index 8d1b2771c1aa02..af5048a1ba8894 100644
> --- a/drivers/s390/cio/vfio_ccw_cp.c
> +++ b/drivers/s390/cio/vfio_ccw_cp.c
> @@ -16,6 +16,7 @@
>  #include <asm/idals.h>
>  
>  #include "vfio_ccw_cp.h"
> +#include "vfio_ccw_private.h"
>  
>  struct pfn_array {
>  	/* Starting guest physical I/O address. */
> @@ -98,17 +99,17 @@ static int pfn_array_alloc(struct pfn_array *pa,
> u64 iova, unsigned int len)
>   * If the pin request partially succeeds, or fails completely,
>   * all pages are left unpinned and a negative error value is
> returned.
>   */
> -static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
> +static int pfn_array_pin(struct pfn_array *pa, struct vfio_device
> *vdev)
>  {
>  	int ret = 0;
>  
> -	ret = vfio_pin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr,
> +	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
>  			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
>  
>  	if (ret < 0) {
>  		goto err_out;
>  	} else if (ret > 0 && ret != pa->pa_nr) {
> -		vfio_unpin_pages(mdev, pa->pa_iova_pfn, ret);
> +		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
>  		ret = -EINVAL;
>  		goto err_out;
>  	}
> @@ -122,11 +123,11 @@ static int pfn_array_pin(struct pfn_array *pa,
> struct device *mdev)
>  }
>  
>  /* Unpin the pages before releasing the memory. */
> -static void pfn_array_unpin_free(struct pfn_array *pa, struct device
> *mdev)
> +static void pfn_array_unpin_free(struct pfn_array *pa, struct
> vfio_device *vdev)
>  {
>  	/* Only unpin if any pages were pinned to begin with */
>  	if (pa->pa_nr)
> -		vfio_unpin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr);
> +		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa-
> >pa_nr);
>  	pa->pa_nr = 0;
>  	kfree(pa->pa_iova_pfn);
>  }
> @@ -190,7 +191,7 @@ static void convert_ccw0_to_ccw1(struct ccw1
> *source, unsigned long len)
>   * Within the domain (@mdev), copy @n bytes from a guest physical
>   * address (@iova) to a host physical address (@to).
>   */
> -static long copy_from_iova(struct device *mdev,
> +static long copy_from_iova(struct vfio_device *vdev,
>  			   void *to, u64 iova,
>  			   unsigned long n)
>  {
> @@ -203,9 +204,9 @@ static long copy_from_iova(struct device *mdev,
>  	if (ret < 0)
>  		return ret;
>  
> -	ret = pfn_array_pin(&pa, mdev);
> +	ret = pfn_array_pin(&pa, vdev);
>  	if (ret < 0) {
> -		pfn_array_unpin_free(&pa, mdev);
> +		pfn_array_unpin_free(&pa, vdev);
>  		return ret;
>  	}
>  
> @@ -226,7 +227,7 @@ static long copy_from_iova(struct device *mdev,
>  			break;
>  	}
>  
> -	pfn_array_unpin_free(&pa, mdev);
> +	pfn_array_unpin_free(&pa, vdev);
>  
>  	return l;
>  }
> @@ -423,11 +424,13 @@ static int ccwchain_loop_tic(struct ccwchain
> *chain,
>  
>  static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
>  {
> +	struct vfio_device *vdev =
> +		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
>  	struct ccwchain *chain;
>  	int len, ret;
>  
>  	/* Copy 2K (the most we support today) of possible CCWs */
> -	len = copy_from_iova(cp->mdev, cp->guest_cp, cda,
> +	len = copy_from_iova(vdev, cp->guest_cp, cda,
>  			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
>  	if (len)
>  		return len;
> @@ -508,6 +511,8 @@ static int ccwchain_fetch_direct(struct ccwchain
> *chain,
>  				 int idx,
>  				 struct channel_program *cp)
>  {
> +	struct vfio_device *vdev =
> +		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
>  	struct ccw1 *ccw;
>  	struct pfn_array *pa;
>  	u64 iova;
> @@ -526,7 +531,7 @@ static int ccwchain_fetch_direct(struct ccwchain
> *chain,
>  	if (ccw_is_idal(ccw)) {
>  		/* Read first IDAW to see if it's 4K-aligned or not. */
>  		/* All subsequent IDAws will be 4K-aligned. */
> -		ret = copy_from_iova(cp->mdev, &iova, ccw->cda,
> sizeof(iova));
> +		ret = copy_from_iova(vdev, &iova, ccw->cda,
> sizeof(iova));
>  		if (ret)
>  			return ret;
>  	} else {
> @@ -555,7 +560,7 @@ static int ccwchain_fetch_direct(struct ccwchain
> *chain,
>  
>  	if (ccw_is_idal(ccw)) {
>  		/* Copy guest IDAL into host IDAL */
> -		ret = copy_from_iova(cp->mdev, idaws, ccw->cda,
> idal_len);
> +		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
>  		if (ret)
>  			goto out_unpin;
>  
> @@ -574,7 +579,7 @@ static int ccwchain_fetch_direct(struct ccwchain
> *chain,
>  	}
>  
>  	if (ccw_does_data_transfer(ccw)) {
> -		ret = pfn_array_pin(pa, cp->mdev);
> +		ret = pfn_array_pin(pa, vdev);
>  		if (ret < 0)
>  			goto out_unpin;
>  	} else {
> @@ -590,7 +595,7 @@ static int ccwchain_fetch_direct(struct ccwchain
> *chain,
>  	return 0;
>  
>  out_unpin:
> -	pfn_array_unpin_free(pa, cp->mdev);
> +	pfn_array_unpin_free(pa, vdev);
>  out_free_idaws:
>  	kfree(idaws);
>  out_init:
> @@ -632,8 +637,10 @@ static int ccwchain_fetch_one(struct ccwchain
> *chain,
>   * Returns:
>   *   %0 on success and a negative error value on failure.
>   */
> -int cp_init(struct channel_program *cp, struct device *mdev, union
> orb *orb)
> +int cp_init(struct channel_program *cp, union orb *orb)
>  {
> +	struct vfio_device *vdev =
> +		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
>  	/* custom ratelimit used to avoid flood during guest IPL */
>  	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
>  	int ret;
> @@ -650,11 +657,10 @@ int cp_init(struct channel_program *cp, struct
> device *mdev, union orb *orb)
>  	 * the problem if something does break.
>  	 */
>  	if (!orb->cmd.pfch && __ratelimit(&ratelimit_state))
> -		dev_warn(mdev, "Prefetching channel program even though
> prefetch not specified in ORB");
> +		dev_warn(vdev->dev, "Prefetching channel program even
> though prefetch not specified in ORB");
>  
>  	INIT_LIST_HEAD(&cp->ccwchain_list);
>  	memcpy(&cp->orb, orb, sizeof(*orb));
> -	cp->mdev = mdev;
>  
>  	/* Build a ccwchain for the first CCW segment */
>  	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
> @@ -682,6 +688,8 @@ int cp_init(struct channel_program *cp, struct
> device *mdev, union orb *orb)
>   */
>  void cp_free(struct channel_program *cp)
>  {
> +	struct vfio_device *vdev =
> +		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
>  	struct ccwchain *chain, *temp;
>  	int i;
>  
> @@ -691,7 +699,7 @@ void cp_free(struct channel_program *cp)
>  	cp->initialized = false;
>  	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next)
> {
>  		for (i = 0; i < chain->ch_len; i++) {
> -			pfn_array_unpin_free(chain->ch_pa + i, cp-
> >mdev);
> +			pfn_array_unpin_free(chain->ch_pa + i, vdev);
>  			ccwchain_cda_free(chain, i);
>  		}
>  		ccwchain_free(chain);
> diff --git a/drivers/s390/cio/vfio_ccw_cp.h
> b/drivers/s390/cio/vfio_ccw_cp.h
> index ba31240ce96594..e4c436199b4cda 100644
> --- a/drivers/s390/cio/vfio_ccw_cp.h
> +++ b/drivers/s390/cio/vfio_ccw_cp.h
> @@ -37,13 +37,11 @@
>  struct channel_program {
>  	struct list_head ccwchain_list;
>  	union orb orb;
> -	struct device *mdev;
>  	bool initialized;
>  	struct ccw1 *guest_cp;
>  };
>  
> -extern int cp_init(struct channel_program *cp, struct device *mdev,
> -		   union orb *orb);
> +extern int cp_init(struct channel_program *cp, union orb *orb);
>  extern void cp_free(struct channel_program *cp);
>  extern int cp_prefetch(struct channel_program *cp);
>  extern union orb *cp_get_orb(struct channel_program *cp, u32
> intparm, u8 lpm);
> diff --git a/drivers/s390/cio/vfio_ccw_fsm.c
> b/drivers/s390/cio/vfio_ccw_fsm.c
> index e435a9cd92dacf..8483a266051c21 100644
> --- a/drivers/s390/cio/vfio_ccw_fsm.c
> +++ b/drivers/s390/cio/vfio_ccw_fsm.c
> @@ -262,8 +262,7 @@ static void fsm_io_request(struct
> vfio_ccw_private *private,
>  			errstr = "transport mode";
>  			goto err_out;
>  		}
> -		io_region->ret_code = cp_init(&private->cp,
> mdev_dev(mdev),
> -					      orb);
> +		io_region->ret_code = cp_init(&private->cp, orb);
>  		if (io_region->ret_code) {
>  			VFIO_CCW_MSG_EVENT(2,
>  					   "%pUl (%x.%x.%04x):
> cp_init=%d\n",


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-14 19:26     ` Eric Farman
  -1 siblings, 0 replies; 141+ messages in thread
From: Eric Farman @ 2022-04-14 19:26 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On Tue, 2022-04-12 at 12:53 -0300, Jason Gunthorpe wrote:
> Every caller has a readily available vfio_device pointer, use that
> instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra
> refcountings,
> and a confusing lifecycle model.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>  .../driver-api/vfio-mediated-device.rst       |  4 +-
>  drivers/s390/cio/vfio_ccw_cp.c                |  6 +--
>  drivers/s390/crypto/vfio_ap_ops.c             |  8 ++--
>  drivers/vfio/vfio.c                           | 40 ++++++-----------
> --
>  include/linux/vfio.h                          |  4 +-
>  5 files changed, 24 insertions(+), 38 deletions(-)

For the -ccw bits:

Acked-by: Eric Farman <farman@linux.ibm.com>
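
The driver-facing change is easiest to see at a call site. A before/after
sketch, taken from the vfio_ap hunk below:

	/* before: dig a struct device out of the mdev */
	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &h_pfn);

	/* after: pass the vfio_device the driver already embeds */
	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &h_pfn);

With a vfio_device in hand, vfio.c can use vdev->group directly instead of
looking up (and refcounting) the group from a struct device.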

> 
> diff --git a/Documentation/driver-api/vfio-mediated-device.rst
> b/Documentation/driver-api/vfio-mediated-device.rst
> index 9f26079cacae35..6aeca741dc9be1 100644
> --- a/Documentation/driver-api/vfio-mediated-device.rst
> +++ b/Documentation/driver-api/vfio-mediated-device.rst
> @@ -279,10 +279,10 @@ Translation APIs for Mediated Devices
>  The following APIs are provided for translating user pfn to host pfn
> in a VFIO
>  driver::
>  
> -	extern int vfio_pin_pages(struct device *dev, unsigned long
> *user_pfn,
> +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned
> long *user_pfn,
>  				  int npage, int prot, unsigned long
> *phys_pfn);
>  
> -	extern int vfio_unpin_pages(struct device *dev, unsigned long
> *user_pfn,
> +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned
> long *user_pfn,
>  				    int npage);
>  
>  These functions call back into the back-end IOMMU module by using
> the pin_pages
> diff --git a/drivers/s390/cio/vfio_ccw_cp.c
> b/drivers/s390/cio/vfio_ccw_cp.c
> index af5048a1ba8894..e362cb962a7234 100644
> --- a/drivers/s390/cio/vfio_ccw_cp.c
> +++ b/drivers/s390/cio/vfio_ccw_cp.c
> @@ -103,13 +103,13 @@ static int pfn_array_pin(struct pfn_array *pa,
> struct vfio_device *vdev)
>  {
>  	int ret = 0;
>  
> -	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
> +	ret = vfio_pin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr,
>  			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
>  
>  	if (ret < 0) {
>  		goto err_out;
>  	} else if (ret > 0 && ret != pa->pa_nr) {
> -		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
> +		vfio_unpin_pages(vdev, pa->pa_iova_pfn, ret);
>  		ret = -EINVAL;
>  		goto err_out;
>  	}
> @@ -127,7 +127,7 @@ static void pfn_array_unpin_free(struct pfn_array
> *pa, struct vfio_device *vdev)
>  {
>  	/* Only unpin if any pages were pinned to begin with */
>  	if (pa->pa_nr)
> -		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa-
> >pa_nr);
> +		vfio_unpin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr);
>  	pa->pa_nr = 0;
>  	kfree(pa->pa_iova_pfn);
>  }
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index 69768061cd7bd9..a10b3369d76c41 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -124,7 +124,7 @@ static void vfio_ap_free_aqic_resources(struct
> vfio_ap_queue *q)
>  		q->saved_isc = VFIO_AP_ISC_INVALID;
>  	}
>  	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> +		vfio_unpin_pages(&q->matrix_mdev->vdev,
>  				 &q->saved_pfn, 1);
>  		q->saved_pfn = 0;
>  	}
> @@ -258,7 +258,7 @@ static struct ap_queue_status
> vfio_ap_irq_enable(struct vfio_ap_queue *q,
>  		return status;
>  	}
>  
> -	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
> +	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
>  			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
>  	switch (ret) {
>  	case 1:
> @@ -301,7 +301,7 @@ static struct ap_queue_status
> vfio_ap_irq_enable(struct vfio_ap_queue *q,
>  		break;
>  	case AP_RESPONSE_OTHERWISE_CHANGED:
>  		/* We could not modify IRQ setings: clear new
> configuration */
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> &g_pfn, 1);
> +		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
>  		kvm_s390_gisc_unregister(kvm, isc);
>  		break;
>  	default:
> @@ -1250,7 +1250,7 @@ static int vfio_ap_mdev_iommu_notifier(struct
> notifier_block *nb,
>  		struct vfio_iommu_type1_dma_unmap *unmap = data;
>  		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
>  
> -		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn,
> 1);
> +		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
>  		return NOTIFY_OK;
>  	}
>  
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 8a5c46aa2bef61..24b92a45cfc8f1 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2142,32 +2142,26 @@
> EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
>   * @phys_pfn[out]: array of host PFNs
>   * Return error or number of pages pinned.
>   */
> -int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int
> npage,
> +int vfio_pin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn, int npage,
>  		   int prot, unsigned long *phys_pfn)
>  {
>  	struct vfio_container *container;
> -	struct vfio_group *group;
> +	struct vfio_group *group = vdev->group;
>  	struct vfio_iommu_driver *driver;
>  	int ret;
>  
> -	if (!dev || !user_pfn || !phys_pfn || !npage)
> +	if (!user_pfn || !phys_pfn || !npage)
>  		return -EINVAL;
>  
>  	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>  		return -E2BIG;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
> -	if (group->dev_counter > 1) {
> -		ret = -EINVAL;
> -		goto err_pin_pages;
> -	}
> +	if (group->dev_counter > 1)
> +		return -EINVAL;
>  
>  	ret = vfio_group_add_container_user(group);
>  	if (ret)
> -		goto err_pin_pages;
> +		return ret;
>  
>  	container = group->container;
>  	driver = container->iommu_driver;
> @@ -2180,8 +2174,6 @@ int vfio_pin_pages(struct device *dev, unsigned
> long *user_pfn, int npage,
>  
>  	vfio_group_try_dissolve_container(group);
>  
> -err_pin_pages:
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_pin_pages);
> @@ -2195,28 +2187,24 @@ EXPORT_SYMBOL(vfio_pin_pages);
>   *                 be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
>   * Return error or number of pages unpinned.
>   */
> -int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> int npage)
> +int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
> +		     int npage)
>  {
>  	struct vfio_container *container;
> -	struct vfio_group *group;
>  	struct vfio_iommu_driver *driver;
>  	int ret;
>  
> -	if (!dev || !user_pfn || !npage)
> +	if (!user_pfn || !npage)
>  		return -EINVAL;
>  
>  	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>  		return -E2BIG;
>  
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
> -	ret = vfio_group_add_container_user(group);
> +	ret = vfio_group_add_container_user(vdev->group);
>  	if (ret)
> -		goto err_unpin_pages;
> +		return ret;
>  
> -	container = group->container;
> +	container = vdev->group->container;
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->unpin_pages))
>  		ret = driver->ops->unpin_pages(container->iommu_data,
> user_pfn,
> @@ -2224,10 +2212,8 @@ int vfio_unpin_pages(struct device *dev,
> unsigned long *user_pfn, int npage)
>  	else
>  		ret = -ENOTTY;
>  
> -	vfio_group_try_dissolve_container(group);
> +	vfio_group_try_dissolve_container(vdev->group);
>  
> -err_unpin_pages:
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unpin_pages);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 748ec0e0293aea..8f2a09801a660b 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -150,9 +150,9 @@ extern long vfio_external_check_extension(struct
> vfio_group *group,
>  
>  #define VFIO_PIN_PAGES_MAX_ENTRIES	(PAGE_SIZE/sizeof(unsigned
> long))
>  
> -extern int vfio_pin_pages(struct device *dev, unsigned long
> *user_pfn,
> +extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			  int npage, int prot, unsigned long
> *phys_pfn);
> -extern int vfio_unpin_pages(struct device *dev, unsigned long
> *user_pfn,
> +extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			    int npage);
>  
>  extern int vfio_group_pin_pages(struct vfio_group *group,


^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-14 14:22       ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-15  2:32         ` Tian, Kevin
  -1 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-15  2:32 UTC (permalink / raw)
  To: Jason Gunthorpe, Matthew Rosato
  Cc: Alexander Gordeev, David Airlie, Tony Krowiak, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Peter Oberparleiter, Halil Pasic, Vivi,
	Rodrigo, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Wang, Zhi A, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, April 14, 2022 10:22 PM
> 
> On Thu, Apr 14, 2022 at 09:51:49AM -0400, Matthew Rosato wrote:
> > On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> > > When the open_device() op is called the container_users is incremented and
> > > held incremented until close_device(). Thus, so long as drivers call
> > > functions within their open_device()/close_device() region they do not
> > > need to worry about the container_users.
> > >
> > > These functions can all only be called between
> > > open_device()/close_device():
> > >
> > >    vfio_pin_pages()
> > >    vfio_unpin_pages()
> > >    vfio_dma_rw()
> > >    vfio_register_notifier()
> > >    vfio_unregister_notifier()
> > >
> > > So eliminate the calls to vfio_group_add_container_user() and add a simple
> > > WARN_ON to detect mis-use by drivers.
> > >
> >
> > vfio_device_fops_release decrements dev->open_count immediately before
> > calling dev->ops->close_device, which means we could enter close_device
> > with an open_count of 0.
> >
> > Maybe vfio_device_fops_release should handle the same way as
> > vfio_group_get_device_fd?
> >
> > 	if (device->open_count == 1 && device->ops->close_device)
> > 		device->ops->close_device(device);
> > 	device->open_count--;
> 
> Yes, thanks a lot! I have nothing to test these flows on...
> 
> It matches the ordering in the only other place to call close_device.
> 
> I folded this into the patch:

While it's a welcome fix, is it actually related to this series? The point
of this patch is that those functions are called while container_users
is non-zero, and that is true even without this fix, given that
container_users is decremented after calling device->ops->close_device().

IIUC this might be better sent out as a separate fix, outside of this series?
Or at least add a note in the commit msg about taking the chance to fix an
unrelated issue, so it does not cause confusion...

Thanks
Kevin
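
A minimal sketch of how the unpin path quoted earlier in the thread could
read once vfio_group_add_container_user() and its paired
vfio_group_try_dissolve_container() are gone. This only rearranges the hunks
quoted above; the actual WARN_ON condition used by the patch is not visible
in this excerpt, so the open_count test below is an assumption:

	int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
			     int npage)
	{
		struct vfio_container *container;
		struct vfio_iommu_driver *driver;

		if (!user_pfn || !npage)
			return -EINVAL;
		if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
			return -E2BIG;

		/*
		 * Assumed mis-use check: the caller must be inside its
		 * open_device()/close_device() window, so the device is open
		 * and container_users is already elevated.
		 */
		if (WARN_ON(!vdev->open_count))
			return -EINVAL;

		container = vdev->group->container;
		driver = container->iommu_driver;
		if (likely(driver && driver->ops->unpin_pages))
			return driver->ops->unpin_pages(container->iommu_data,
							user_pfn, npage);
		return -ENOTTY;
	}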

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-15  2:32         ` Tian, Kevin
@ 2022-04-15 12:07           ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-15 12:07 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Matthew Rosato, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A, Christoph Hellwig,
	Liu, Yi L

On Fri, Apr 15, 2022 at 02:32:08AM +0000, Tian, Kevin wrote:

> While it's a welcomed fix is it actually related to this series? The point
> of this patch is that those functions are called when container_users
> is non-zero. This is true even without this fix given container_users
> is decremented after calling device->ops->close_device().

It isn't; it is decremented before close_device(), which causes it to be 0
when the assertions are called.

Jason
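
To spell out the ordering under discussion, a conceptual sketch (not a
verbatim copy of vfio_device_fops_release(); locking and the rest of the
release path are omitted):

	/* Problematic ordering: open_count reaches 0 before close_device()
	 * runs, so assertions in the close path see an already-closed device.
	 */
	device->open_count--;
	if (device->open_count == 0 && device->ops->close_device)
		device->ops->close_device(device);

	/* Ordering folded into the patch, matching vfio_group_get_device_fd():
	 * close_device() runs while open_count is still 1.
	 */
	if (device->open_count == 1 && device->ops->close_device)
		device->ops->close_device(device);
	device->open_count--;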

^ permalink raw reply	[flat|nested] 141+ messages in thread

* RE: [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-15 12:07           ` Jason Gunthorpe
@ 2022-04-15 23:45             ` Tian, Kevin
  -1 siblings, 0 replies; 141+ messages in thread
From: Tian, Kevin @ 2022-04-15 23:45 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Matthew Rosato, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A, Christoph Hellwig,
	Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 15, 2022 8:07 PM
> 
> On Fri, Apr 15, 2022 at 02:32:08AM +0000, Tian, Kevin wrote:
> 
> > While it's a welcomed fix is it actually related to this series? The point
> > of this patch is that those functions are called when container_users
> > is non-zero. This is true even without this fix given container_users
> > is decremented after calling device->ops->close_device().
> 
> It isn't, it is decremented before which causes it to be 0 when the
> assertions are called.
> 

Right, it's quite obvious when I read it the second time.

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-18 15:25     ` Jason J. Herne
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason J. Herne @ 2022-04-18 15:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On 4/12/22 11:53, Jason Gunthorpe wrote:
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.
> ...
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 69768061cd7bd9..a10b3369d76c41 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -124,7 +124,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
>   		q->saved_isc = VFIO_AP_ISC_INVALID;
>   	}
>   	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> +		vfio_unpin_pages(&q->matrix_mdev->vdev,
>   				 &q->saved_pfn, 1);

Could be contracted to a single line, if you feel like it :) (the one-line form is sketched below).

>   		q->saved_pfn = 0;
>   	}
> @@ -258,7 +258,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
>   		return status;
>   	}
>   
> -	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
> +	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
>   			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
>   	switch (ret) {
>   	case 1:
> @@ -301,7 +301,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
>   		break;
>   	case AP_RESPONSE_OTHERWISE_CHANGED:
>   		/* We could not modify IRQ setings: clear new configuration */
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1);
> +		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
>   		kvm_s390_gisc_unregister(kvm, isc);
>   		break;
>   	default:
> @@ -1250,7 +1250,7 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
>   		struct vfio_iommu_type1_dma_unmap *unmap = data;
>   		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
>   
> -		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn, 1);
> +		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
>   		return NOTIFY_OK;
>   	}
>   

Looks good to me.
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
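
The contracted form of the first hunk would read (same call as quoted above,
arguments pulled onto one line):

	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
		vfio_unpin_pages(&q->matrix_mdev->vdev, &q->saved_pfn, 1);
		q->saved_pfn = 0;
	}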

-- 
-- Jason J. Herne (jjherne@linux.ibm.com)

^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-12 15:53   ` Jason Gunthorpe
@ 2022-04-18 15:28     ` Tony Krowiak
  -1 siblings, 0 replies; 141+ messages in thread
From: Tony Krowiak @ 2022-04-18 15:28 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L



On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
>
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> vfio_mdev.c puts in the drv_data.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>   drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>   drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>   drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>   drivers/vfio/vfio.c               | 25 +++++++------------------
>   include/linux/mdev.h              |  1 +
>   include/linux/vfio.h              |  4 ++--
>   7 files changed, 41 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
>   
>   static int intel_vgpu_open_device(struct mdev_device *mdev)
>   {
> +	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(mdev);
>   	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>   
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
>   				&vdev->iommu_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	}
>   
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
>   				&vdev->group_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->vfio_group = NULL;
>   
>   undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   
>   undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   out:
>   	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>   	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>   	int ret;
>   
>   	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   
>   	intel_gvt_ops->vgpu_release(vgpu);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for group failed: %d\n", ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				     &events, &private->nb);
>   	if (ret)
>   		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   out_unregister:
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   	return ret;
>   }
>   
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
>   
>   	cp_free(&private->cp);
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   }
>   
>   static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
>   	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev->group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>   	if (ret)
>   		return ret;
>   
>   	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev->iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>   	if (ret)
>   		goto out_unregister_group;
>   	return 0;
>   
>   out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	return ret;
>   }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
>   	struct ap_matrix_mdev *matrix_mdev =
>   		container_of(vdev, struct ap_matrix_mdev, vdev);
>   
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	vfio_ap_mdev_unset_kvm(matrix_mdev);
>   }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>   
>   #include "mdev_private.h"
>   
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>   static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>   {
>   	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
>   	return ret;
>   }
>   
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
>   			   unsigned long *events, struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Is there a guarantee that dev != NULL? The original code below checks
the value of dev, so why is that check eliminated here?

>   	int ret;
>   
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_register_notifier);
>   
> -int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>   			     struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Same comment as above, no NULL check here.

>   	int ret;
>   
> -	if (!dev || !nb)
> +	if (!nb)
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device *to_mdev_device(struct device *dev)
>   unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>   unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>   struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev);
>   
>   /**
>    * struct mdev_parent_ops - Structure to be registered for each parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>   /* events for VFIO_GROUP_NOTIFY */
>   #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>   
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>   				  enum vfio_notify_type type,
>   				  unsigned long *required_events,
>   				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>   				    enum vfio_notify_type type,
>   				    struct notifier_block *nb);
>   


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-18 15:28     ` Tony Krowiak
  0 siblings, 0 replies; 141+ messages in thread
From: Tony Krowiak @ 2022-04-18 15:28 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Tian, Kevin, Liu, Yi L, Christoph Hellwig



On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
>
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> vfio_mdev.c puts in the drv_data.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>   drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>   drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>   drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>   drivers/vfio/vfio.c               | 25 +++++++------------------
>   include/linux/mdev.h              |  1 +
>   include/linux/vfio.h              |  4 ++--
>   7 files changed, 41 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
>   
>   static int intel_vgpu_open_device(struct mdev_device *mdev)
>   {
> +	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(mdev);
>   	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>   
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
>   				&vdev->iommu_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	}
>   
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
>   				&vdev->group_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->vfio_group = NULL;
>   
>   undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   
>   undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   out:
>   	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>   	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>   	int ret;
>   
>   	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   
>   	intel_gvt_ops->vgpu_release(vgpu);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for group failed: %d\n", ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				     &events, &private->nb);
>   	if (ret)
>   		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   out_unregister:
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   	return ret;
>   }
>   
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
>   
>   	cp_free(&private->cp);
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   }
>   
>   static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
>   	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev->group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>   	if (ret)
>   		return ret;
>   
>   	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev->iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>   	if (ret)
>   		goto out_unregister_group;
>   	return 0;
>   
>   out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	return ret;
>   }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
>   	struct ap_matrix_mdev *matrix_mdev =
>   		container_of(vdev, struct ap_matrix_mdev, vdev);
>   
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	vfio_ap_mdev_unset_kvm(matrix_mdev);
>   }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>   
>   #include "mdev_private.h"
>   
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>   static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>   {
>   	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
>   	return ret;
>   }
>   
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
>   			   unsigned long *events, struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Is there a guarantee that dev != NULL? The original code below checks
the value of dev, so why is that check eliminated here?

>   	int ret;
>   
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_register_notifier);
>   
> -int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>   			     struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Same comment as above, not NULL check here.

>   	int ret;
>   
> -	if (!dev || !nb)
> +	if (!nb)
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device *to_mdev_device(struct device *dev)
>   unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>   unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>   struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev);
>   
>   /**
>    * struct mdev_parent_ops - Structure to be registered for each parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>   /* events for VFIO_GROUP_NOTIFY */
>   #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>   
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>   				  enum vfio_notify_type type,
>   				  unsigned long *required_events,
>   				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>   				    enum vfio_notify_type type,
>   				    struct notifier_block *nb);
>   


^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [Intel-gfx] [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-18 15:28     ` Tony Krowiak
  0 siblings, 0 replies; 141+ messages in thread
From: Tony Krowiak @ 2022-04-18 15:28 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Liu, Yi L, Christoph Hellwig



On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
>
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> vfio_mdev.c puts in the drv_data.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/gpu/drm/i915/gvt/kvmgt.c  | 15 +++++++++------
>   drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>   drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>   drivers/vfio/mdev/vfio_mdev.c     | 12 ++++++++++++
>   drivers/vfio/vfio.c               | 25 +++++++------------------
>   include/linux/mdev.h              |  1 +
>   include/linux/vfio.h              |  4 ++--
>   7 files changed, 41 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 057ec449010458..bb59d21cf898ab 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -904,6 +904,7 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
>   
>   static int intel_vgpu_open_device(struct mdev_device *mdev)
>   {
> +	struct vfio_device *vfio_dev = mdev_legacy_get_vfio_device(mdev);
>   	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	unsigned long events;
> @@ -914,7 +915,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>   
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
>   				&vdev->iommu_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
> @@ -923,7 +924,7 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	}
>   
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
>   				&vdev->group_notifier);
>   	if (ret != 0) {
>   		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
> @@ -961,11 +962,11 @@ static int intel_vgpu_open_device(struct mdev_device *mdev)
>   	vdev->vfio_group = NULL;
>   
>   undo_register:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   
>   undo_iommu:
> -	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   out:
>   	return ret;
> @@ -988,6 +989,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>   	struct drm_i915_private *i915 = vgpu->gvt->gt->i915;
>   	struct kvmgt_guest_info *info;
> +	struct vfio_device *vfio_dev;
>   	int ret;
>   
>   	if (!handle_valid(vgpu->handle))
> @@ -998,12 +1000,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>   
>   	intel_gvt_ops->vgpu_release(vgpu);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
> +	vfio_dev = mdev_legacy_get_vfio_device(vdev->mdev);
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
>   					&vdev->iommu_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
>   
> -	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
> +	ret = vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>   					&vdev->group_notifier);
>   	drm_WARN(&i915->drm, ret,
>   		 "vfio_unregister_notifier for group failed: %d\n", ret);
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
> index d8589afac272f1..e1ce24d8fb2555 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   	private->nb.notifier_call = vfio_ccw_mdev_notifier;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				     &events, &private->nb);
>   	if (ret)
>   		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
>   
>   out_unregister:
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   	return ret;
>   }
>   
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
>   
>   	cp_free(&private->cp);
>   	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>   }
>   
>   static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
>   	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev->group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>   	if (ret)
>   		return ret;
>   
>   	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev->iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>   	if (ret)
>   		goto out_unregister_group;
>   	return 0;
>   
>   out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	return ret;
>   }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
>   	struct ap_matrix_mdev *matrix_mdev =
>   		container_of(vdev, struct ap_matrix_mdev, vdev);
>   
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	vfio_ap_mdev_unset_kvm(matrix_mdev);
>   }
> diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c
> index a90e24b0c851d3..91605c1e8c8f94 100644
> --- a/drivers/vfio/mdev/vfio_mdev.c
> +++ b/drivers/vfio/mdev/vfio_mdev.c
> @@ -17,6 +17,18 @@
>   
>   #include "mdev_private.h"
>   
> +/*
> + * Return the struct vfio_device for the mdev when using the legacy
> + * vfio_mdev_dev_ops path. No new callers to this function should be added.
> + */
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev)
> +{
> +	if (WARN_ON(mdev->dev.driver != &vfio_mdev_driver.driver))
> +		return NULL;
> +	return dev_get_drvdata(&mdev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mdev_legacy_get_vfio_device);
> +
>   static int vfio_mdev_open_device(struct vfio_device *core_vdev)
>   {
>   	struct mdev_device *mdev = to_mdev_device(core_vdev->dev);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
>   	return ret;
>   }
>   
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
>   			   unsigned long *events, struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Is there a guarantee that dev != NULL? The original code below checks
the value of dev, so why is that check eliminated here?

>   	int ret;
>   
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_register_notifier);
>   
> -int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>   			     struct notifier_block *nb)
>   {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;

Same comment as above, no NULL check here.

>   	int ret;
>   
> -	if (!dev || !nb)
> +	if (!nb)
>   		return -EINVAL;
>   
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>   	switch (type) {
>   	case VFIO_IOMMU_NOTIFY:
>   		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
>   	default:
>   		ret = -EINVAL;
>   	}
> -
> -	vfio_group_put(group);
>   	return ret;
>   }
>   EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/mdev.h b/include/linux/mdev.h
> index 15d03f6532d073..67d07220a28f29 100644
> --- a/include/linux/mdev.h
> +++ b/include/linux/mdev.h
> @@ -29,6 +29,7 @@ static inline struct mdev_device *to_mdev_device(struct device *dev)
>   unsigned int mdev_get_type_group_id(struct mdev_device *mdev);
>   unsigned int mtype_get_type_group_id(struct mdev_type *mtype);
>   struct device *mtype_get_parent_dev(struct mdev_type *mtype);
> +struct vfio_device *mdev_legacy_get_vfio_device(struct mdev_device *mdev);
>   
>   /**
>    * struct mdev_parent_ops - Structure to be registered for each parent device to
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>   /* events for VFIO_GROUP_NOTIFY */
>   #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
>   
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>   				  enum vfio_notify_type type,
>   				  unsigned long *required_events,
>   				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>   				    enum vfio_notify_type type,
>   				    struct notifier_block *nb);
>   


^ permalink raw reply	[flat|nested] 141+ messages in thread
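
For reference, a minimal sketch of what the open/close paths of a converted
mdev driver look like with the vfio_device-based prototypes quoted above.
The driver name, private structure and callback are hypothetical; only the
vfio_register_notifier()/vfio_unregister_notifier() signatures follow the
diff:

        struct foo_mdev {
                struct vfio_device vdev;
                struct notifier_block iommu_nb;
        };

        static int foo_iommu_notifier(struct notifier_block *nb,
                                      unsigned long action, void *data)
        {
                /* React to VFIO_IOMMU_NOTIFY_DMA_UNMAP events here. */
                return NOTIFY_OK;
        }

        static int foo_open_device(struct vfio_device *vdev)
        {
                struct foo_mdev *foo = container_of(vdev, struct foo_mdev, vdev);
                unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

                foo->iommu_nb.notifier_call = foo_iommu_notifier;
                /* Pass the vfio_device itself, not vdev->dev. */
                return vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
                                              &events, &foo->iommu_nb);
        }

        static void foo_close_device(struct vfio_device *vdev)
        {
                struct foo_mdev *foo = container_of(vdev, struct foo_mdev, vdev);

                vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
                                         &foo->iommu_nb);
        }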

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-12 15:53   ` Jason Gunthorpe
  (?)
@ 2022-04-18 15:29     ` Jason J. Herne
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason J. Herne @ 2022-04-18 15:29 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie, Tony Krowiak,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L

On 4/12/22 11:53, Jason Gunthorpe wrote:
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> vfio_mdev.c puts in the drv_data.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ...
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 6e08d04b605d6e..69768061cd7bd9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
>   	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
>   	events = VFIO_GROUP_NOTIFY_SET_KVM;
>   
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev->group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>   	if (ret)
>   		return ret;
>   
>   	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
>   	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev->iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>   	if (ret)
>   		goto out_unregister_group;
>   	return 0;
>   
>   out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	return ret;
>   }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
>   	struct ap_matrix_mdev *matrix_mdev =
>   		container_of(vdev, struct ap_matrix_mdev, vdev);
>   
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>   				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>   				 &matrix_mdev->group_notifier);
>   	vfio_ap_mdev_unset_kvm(matrix_mdev);
>   }
looks good for vfio_ap.
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>


-- 
-- Jason J. Herne (jjherne@linux.ibm.com)

^ permalink raw reply	[flat|nested] 141+ messages in thread
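
For the unconverted kvmgt path mentioned in the commit message, the bridge
through the temporary helper would look roughly like this (the surrounding
function is hypothetical; mdev_legacy_get_vfio_device() and the notifier
prototype are the ones added by this patch):

        /* Hypothetical legacy call site that only has the mdev_device. */
        static int foo_register_unmap_notifier(struct mdev_device *mdev,
                                                struct notifier_block *nb)
        {
                struct vfio_device *vdev = mdev_legacy_get_vfio_device(mdev);
                unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

                if (!vdev)      /* the helper already WARNs in this case */
                        return -ENODEV;

                return vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
                                              &events, nb);
        }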

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-18 15:28     ` Tony Krowiak
  (?)
@ 2022-04-18 15:44       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-18 15:44 UTC (permalink / raw)
  To: Tony Krowiak
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Zhi Wang, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Mon, Apr 18, 2022 at 11:28:30AM -0400, Tony Krowiak wrote:
> > diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> > index a4555014bd1e72..8a5c46aa2bef61 100644
> > +++ b/drivers/vfio/vfio.c
> > @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
> >   	return ret;
> >   }
> > -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
> > +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
> >   			   unsigned long *events, struct notifier_block *nb)
> >   {
> > -	struct vfio_group *group;
> > +	struct vfio_group *group = dev->group;
> 
> Is there a guarantee that dev != NULL? The original code below checks
> the value of dev, so why is that check eliminated here?

Yes, no kernel driver calls this with null dev. The original code
should have been a WARN_ON as it is just protecting against a buggy
driver. In this case if the driver is buggy we simply generate a
backtrace through a null deref panic.

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread
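
Concretely, the defensive form described here would be a guard ahead of the
group lookup, along these lines (a sketch only, not part of the posted
series):

        if (WARN_ON(!dev) || !nb || !events || (*events == 0))
                return -EINVAL;
        group = dev->group;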

* Re: [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-18 15:44       ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-18 15:52         ` Tony Krowiak
  -1 siblings, 0 replies; 141+ messages in thread
From: Tony Krowiak @ 2022-04-18 15:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Eric Farman, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Christoph Hellwig, Tian,
	Kevin, Liu, Yi L



On 4/18/22 11:44 AM, Jason Gunthorpe wrote:
> On Mon, Apr 18, 2022 at 11:28:30AM -0400, Tony Krowiak wrote:
>>> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
>>> index a4555014bd1e72..8a5c46aa2bef61 100644
>>> +++ b/drivers/vfio/vfio.c
>>> @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
>>>    	return ret;
>>>    }
>>> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
>>> +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
>>>    			   unsigned long *events, struct notifier_block *nb)
>>>    {
>>> -	struct vfio_group *group;
>>> +	struct vfio_group *group = dev->group;
>> Is there a guarantee that dev != NULL? The original code below checks
>> the value of dev, so why is that check eliminated here?
> Yes, no kernel driver calls this with null dev. The original code
> should have been a WARN_ON as it is just protecting against a buggy
> driver. In this case if the driver is buggy we simply generate a
> backtrace through a null deref panic.
>
> Jason

Regarding the vfio_ap parts:
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>



^ permalink raw reply	[flat|nested] 141+ messages in thread

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-12 15:53   ` Jason Gunthorpe
  (?)
@ 2022-04-18 15:56     ` Tony Krowiak
  -1 siblings, 0 replies; 141+ messages in thread
From: Tony Krowiak @ 2022-04-18 15:56 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Eric Farman,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang
  Cc: Christoph Hellwig, Tian, Kevin, Liu, Yi L



On 4/12/22 11:53 AM, Jason Gunthorpe wrote:
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   .../driver-api/vfio-mediated-device.rst       |  4 +-
>   drivers/s390/cio/vfio_ccw_cp.c                |  6 +--
>   drivers/s390/crypto/vfio_ap_ops.c             |  8 ++--
>   drivers/vfio/vfio.c                           | 40 ++++++-------------
>   include/linux/vfio.h                          |  4 +-
>   5 files changed, 24 insertions(+), 38 deletions(-)
>
> diff --git a/Documentation/driver-api/vfio-mediated-device.rst b/Documentation/driver-api/vfio-mediated-device.rst
> index 9f26079cacae35..6aeca741dc9be1 100644
> --- a/Documentation/driver-api/vfio-mediated-device.rst
> +++ b/Documentation/driver-api/vfio-mediated-device.rst
> @@ -279,10 +279,10 @@ Translation APIs for Mediated Devices
>   The following APIs are provided for translating user pfn to host pfn in a VFIO
>   driver::
>   
> -	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>   				  int npage, int prot, unsigned long *phys_pfn);
>   
> -	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> +	extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>   				    int npage);
>   
>   These functions call back into the back-end IOMMU module by using the pin_pages
>
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index 69768061cd7bd9..a10b3369d76c41 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -124,7 +124,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
>   		q->saved_isc = VFIO_AP_ISC_INVALID;
>   	}
>   	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> +		vfio_unpin_pages(&q->matrix_mdev->vdev,
>   				 &q->saved_pfn, 1);
>   		q->saved_pfn = 0;
>   	}
> @@ -258,7 +258,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
>   		return status;
>   	}
>   
> -	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
> +	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
>   			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
>   	switch (ret) {
>   	case 1:
> @@ -301,7 +301,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
>   		break;
>   	case AP_RESPONSE_OTHERWISE_CHANGED:
>   		/* We could not modify IRQ setings: clear new configuration */
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1);
> +		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
>   		kvm_s390_gisc_unregister(kvm, isc);
>   		break;
>   	default:
> @@ -1250,7 +1250,7 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
>   		struct vfio_iommu_type1_dma_unmap *unmap = data;
>   		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
>   
> -		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn, 1);
> +		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
>   		return NOTIFY_OK;
>   	}

The vfio_ap snippet:
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>

>   
>


^ permalink raw reply	[flat|nested] 141+ messages in thread
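
A minimal sketch of the converted pin/unpin flow using the prototypes quoted
from the documentation hunk above; the helper names are hypothetical, the
types mirror the vfio_ap hunks, and vfio_pin_pages() returns the number of
pages pinned, as the existing vfio_ap code expects:

        static int foo_pin_one(struct ap_matrix_mdev *matrix_mdev,
                               unsigned long g_pfn, unsigned long *h_pfn)
        {
                int ret;

                ret = vfio_pin_pages(&matrix_mdev->vdev, &g_pfn, 1,
                                     IOMMU_READ | IOMMU_WRITE, h_pfn);
                /* Exactly one page pinned is success here. */
                return ret == 1 ? 0 : (ret < 0 ? ret : -EINVAL);
        }

        static void foo_unpin_one(struct ap_matrix_mdev *matrix_mdev,
                                  unsigned long g_pfn)
        {
                vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
        }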

* Re: [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-18 15:25     ` Jason J. Herne
  (?)
@ 2022-04-19 17:00       ` Jason Gunthorpe
  -1 siblings, 0 replies; 141+ messages in thread
From: Jason Gunthorpe @ 2022-04-19 17:00 UTC (permalink / raw)
  To: Jason J. Herne
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Zhi Wang, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Mon, Apr 18, 2022 at 11:25:15AM -0400, Jason J. Herne wrote:
> On 4/12/22 11:53, Jason Gunthorpe wrote:
> > Every caller has a readily available vfio_device pointer, use that instead
> > of passing in a generic struct device. The struct vfio_device already
> > contains the group we need so this avoids complexity, extra refcountings,
> > and a confusing lifecycle model.
> > ...
> > diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> > index 69768061cd7bd9..a10b3369d76c41 100644
> > +++ b/drivers/s390/crypto/vfio_ap_ops.c
> > @@ -124,7 +124,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
> >   		q->saved_isc = VFIO_AP_ISC_INVALID;
> >   	}
> >   	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
> > -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> > +		vfio_unpin_pages(&q->matrix_mdev->vdev,
> >   				 &q->saved_pfn, 1);
> 
> Could be contracted to a single line. If you feel like it :)

Done, thanks

Jason

^ permalink raw reply	[flat|nested] 141+ messages in thread
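
The contracted call referred to here would simply read:

        vfio_unpin_pages(&q->matrix_mdev->vdev, &q->saved_pfn, 1);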

end of thread, other threads:[~2022-04-25 13:04 UTC | newest]

Thread overview: 141+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-12 15:53 [PATCH 0/9] Make the rest of the VFIO driver interface use vfio_device Jason Gunthorpe
2022-04-12 15:53 ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53 ` Jason Gunthorpe
2022-04-12 15:53 ` [PATCH 1/9] vfio: Make vfio_(un)register_notifier accept a vfio_device Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  5:55   ` Christoph Hellwig
2022-04-13  5:55     ` [Intel-gfx] " Christoph Hellwig
2022-04-13 11:39     ` Jason Gunthorpe
2022-04-13 11:39       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 11:39       ` Jason Gunthorpe
2022-04-13 16:06       ` Christoph Hellwig
2022-04-13 16:06         ` [Intel-gfx] " Christoph Hellwig
2022-04-13 16:18         ` Jason Gunthorpe
2022-04-13 16:18           ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 16:18           ` Jason Gunthorpe
2022-04-13 16:29           ` Christoph Hellwig
2022-04-13 16:29             ` [Intel-gfx] " Christoph Hellwig
2022-04-13 17:37             ` Jason Gunthorpe
2022-04-13 17:37               ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 17:37               ` Jason Gunthorpe
2022-04-13 19:17               ` Wang, Zhi A
2022-04-13 19:17                 ` [Intel-gfx] " Wang, Zhi A
2022-04-13 19:17                 ` Wang, Zhi A
2022-04-13 20:04                 ` Jason Gunthorpe
2022-04-13 20:04                   ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 20:04                   ` Jason Gunthorpe
2022-04-13 21:08                   ` [Intel-gfx] " Wang, Zhi A
2022-04-13 21:08                     ` Wang, Zhi A
2022-04-13 21:08                     ` Wang, Zhi A
2022-04-13 23:12                     ` Jason Gunthorpe
2022-04-13 23:12                       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 23:12                       ` Jason Gunthorpe
2022-04-14  2:04                       ` Tian, Kevin
2022-04-14  2:04                         ` [Intel-gfx] " Tian, Kevin
2022-04-14  2:04                         ` Tian, Kevin
2022-04-14  2:15                     ` Tian, Kevin
2022-04-14  2:15                       ` [Intel-gfx] " Tian, Kevin
2022-04-14  2:15                       ` Tian, Kevin
2022-04-14 19:25   ` Eric Farman
2022-04-14 19:25     ` [Intel-gfx] " Eric Farman
2022-04-14 19:25     ` Eric Farman
2022-04-18 15:28   ` Tony Krowiak
2022-04-18 15:28     ` [Intel-gfx] " Tony Krowiak
2022-04-18 15:28     ` Tony Krowiak
2022-04-18 15:44     ` Jason Gunthorpe
2022-04-18 15:44       ` Jason Gunthorpe
2022-04-18 15:44       ` [Intel-gfx] " Jason Gunthorpe
2022-04-18 15:52       ` Tony Krowiak
2022-04-18 15:52         ` [Intel-gfx] " Tony Krowiak
2022-04-18 15:52         ` Tony Krowiak
2022-04-18 15:29   ` Jason J. Herne
2022-04-18 15:29     ` [Intel-gfx] " Jason J. Herne
2022-04-18 15:29     ` Jason J. Herne
2022-04-12 15:53 ` [PATCH 2/9] vfio/ccw: Remove mdev from struct channel_program Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-14 19:25   ` Eric Farman
2022-04-14 19:25     ` [Intel-gfx] " Eric Farman
2022-04-14 19:25     ` Eric Farman
2022-04-12 15:53 ` [PATCH 3/9] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages() Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  5:57   ` Christoph Hellwig
2022-04-13  5:57     ` [Intel-gfx] " Christoph Hellwig
2022-04-13 11:40     ` Jason Gunthorpe
2022-04-13 11:40       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 11:40       ` Jason Gunthorpe
2022-04-14 19:26   ` Eric Farman
2022-04-14 19:26     ` [Intel-gfx] " Eric Farman
2022-04-14 19:26     ` Eric Farman
2022-04-18 15:25   ` Jason J. Herne
2022-04-18 15:25     ` [Intel-gfx] " Jason J. Herne
2022-04-18 15:25     ` Jason J. Herne
2022-04-19 17:00     ` Jason Gunthorpe
2022-04-19 17:00       ` Jason Gunthorpe
2022-04-19 17:00       ` [Intel-gfx] " Jason Gunthorpe
2022-04-18 15:56   ` Tony Krowiak
2022-04-18 15:56     ` [Intel-gfx] " Tony Krowiak
2022-04-18 15:56     ` Tony Krowiak
2022-04-12 15:53 ` [PATCH 4/9] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  6:01   ` Christoph Hellwig
2022-04-13  6:01     ` [Intel-gfx] " Christoph Hellwig
2022-04-13 13:39     ` Jason Gunthorpe
2022-04-13 13:39       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 13:39       ` Jason Gunthorpe
2022-04-12 15:53 ` [PATCH 5/9] vfio: Pass in a struct vfio_device * to vfio_dma_rw() Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  6:00   ` Christoph Hellwig
2022-04-13  6:00     ` [Intel-gfx] " Christoph Hellwig
2022-04-13 13:39     ` Jason Gunthorpe
2022-04-13 13:39       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 13:39       ` Jason Gunthorpe
2022-04-12 15:53 ` [PATCH 6/9] drm/i915/gvt: Add missing module_put() in error unwind Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  5:59   ` Christoph Hellwig
2022-04-13  5:59     ` [Intel-gfx] " Christoph Hellwig
2022-04-12 15:53 ` [PATCH 7/9] drm/i915/gvt: Delete kvmgt_vdev::vfio_group Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-12 15:53 ` [PATCH 8/9] vfio: Remove dead code Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  6:01   ` Christoph Hellwig
2022-04-13  6:01     ` [Intel-gfx] " Christoph Hellwig
2022-04-12 15:53 ` [PATCH 9/9] vfio: Remove calls to vfio_group_add_container_user() Jason Gunthorpe
2022-04-12 15:53   ` [Intel-gfx] " Jason Gunthorpe
2022-04-12 15:53   ` Jason Gunthorpe
2022-04-13  6:11   ` Christoph Hellwig
2022-04-13  6:11     ` [Intel-gfx] " Christoph Hellwig
2022-04-13 14:03     ` Jason Gunthorpe
2022-04-13 14:03       ` [Intel-gfx] " Jason Gunthorpe
2022-04-13 14:03       ` Jason Gunthorpe
2022-04-13 16:07       ` Christoph Hellwig
2022-04-13 16:07         ` [Intel-gfx] " Christoph Hellwig
2022-04-14 13:51   ` Matthew Rosato
2022-04-14 13:51     ` [Intel-gfx] " Matthew Rosato
2022-04-14 13:51     ` Matthew Rosato
2022-04-14 14:22     ` Jason Gunthorpe
2022-04-14 14:22       ` Jason Gunthorpe
2022-04-14 14:22       ` [Intel-gfx] " Jason Gunthorpe
2022-04-15  2:32       ` Tian, Kevin
2022-04-15  2:32         ` [Intel-gfx] " Tian, Kevin
2022-04-15  2:32         ` Tian, Kevin
2022-04-15 12:07         ` Jason Gunthorpe
2022-04-15 12:07           ` [Intel-gfx] " Jason Gunthorpe
2022-04-15 12:07           ` Jason Gunthorpe
2022-04-15 23:45           ` Tian, Kevin
2022-04-15 23:45             ` [Intel-gfx] " Tian, Kevin
2022-04-15 23:45             ` Tian, Kevin
2022-04-13  5:52 ` [PATCH 0/9] Make the rest of the VFIO driver interface use vfio_device Christoph Hellwig
2022-04-13  5:52   ` [Intel-gfx] " Christoph Hellwig
2022-04-13 12:31 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2022-04-13 12:31 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2022-04-13 12:56 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-04-13 15:21 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2022-04-14 15:21 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Make the rest of the VFIO driver interface use vfio_device (rev2) Patchwork
