* [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
@ 2022-04-21 16:28 ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

Prior series have transformed other parts of VFIO from working on struct
device or struct vfio_group into working directly on struct
vfio_device. Based on that work, a vfio_device is now readily available
in all the drivers.

Update the rest of the driver-facing API to take a vfio_device as input.

The following are switched from struct device to struct vfio_device:
  vfio_register_notifier()
  vfio_unregister_notifier()
  vfio_pin_pages()
  vfio_unpin_pages()
  vfio_dma_rw()

The following group APIs become redundant once the above take a struct
vfio_device, and are removed (a short sketch follows the list):
  vfio_group_pin_pages()
  vfio_group_unpin_pages()
  vfio_group_iommu_domain()
  vfio_group_get_external_user_from_dev()
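
For illustration, the driver-side calls change roughly like this (a
sketch only, using a hypothetical caller that already holds its
struct vfio_device *vdev; the exact prototypes are in the individual
patches):

	/* before: keyed by struct device (or the group variants) */
	vfio_pin_pages(vdev->dev, user_pfn, npage, prot, phys_pfn);
	vfio_unpin_pages(vdev->dev, user_pfn, npage);

	/* after: keyed directly by the vfio_device */
	vfio_pin_pages(vdev, user_pfn, npage, prot, phys_pfn);
	vfio_unpin_pages(vdev, user_pfn, npage);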

To retain the performance of the new device APIs relative to their group
versions, optimize how vfio_group_add_container_user() is used: avoid
calling it when the driver already guarantees the device is open and
container_users has been incremented.

The remaining exported VFIO group interfaces are only used by kvm, and are
addressed by a parallel series.

This series is based on Christoph's gvt rework here:

 https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/

and so will need the PR merged first.

I have a follow-up series that needs this.

This is also part of the iommufd work - moving the driver-facing interface
to vfio_device provides a much cleaner path to integrating with iommufd.

v2:
 - Based on Christoph's series so mdev_legacy_get_vfio_device() is removed
 - Reflow indenting
 - Use vfio_assert_device_open() and WARN_ON_ONCE instead of open coding
   the assertion
v1: https://lore.kernel.org/r/0-v1-a8faf768d202+125dd-vfio_mdev_no_group_jgg@nvidia.com

Cc: Christoph Hellwig <hch@lst.de>
Cc: intel-gfx@lists.freedesktop.org
Cc: intel-gvt-dev@lists.freedesktop.org
Cc: kvm@vger.kernel.org
Cc: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "Liu, Yi L" <yi.l.liu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Jason Gunthorpe (7):
  vfio: Make vfio_(un)register_notifier accept a vfio_device
  vfio/ccw: Remove mdev from struct channel_program
  vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
  drm/i915/gvt: Change from vfio_group_(un)pin_pages to
    vfio_(un)pin_pages
  vfio: Remove dead code
  vfio: Remove calls to vfio_group_add_container_user()

 .../driver-api/vfio-mediated-device.rst       |   4 +-
 drivers/gpu/drm/i915/gvt/gvt.h                |   5 +-
 drivers/gpu/drm/i915/gvt/kvmgt.c              |  51 ++--
 drivers/s390/cio/vfio_ccw_cp.c                |  47 +--
 drivers/s390/cio/vfio_ccw_cp.h                |   4 +-
 drivers/s390/cio/vfio_ccw_fsm.c               |   3 +-
 drivers/s390/cio/vfio_ccw_ops.c               |   7 +-
 drivers/s390/crypto/vfio_ap_ops.c             |  23 +-
 drivers/vfio/vfio.c                           | 288 ++----------------
 include/linux/vfio.h                          |  21 +-
 10 files changed, 102 insertions(+), 351 deletions(-)


base-commit: 3515cc4aa9440795ab20b87ade2e2727267d469d
-- 
2.36.0


* [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-21 16:28   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

All callers have a struct vfio_device trivially available; pass it in
directly and avoid calling the expensive vfio_group_get_from_dev().

To support the unconverted kvmgt mdev driver, add
mdev_legacy_get_vfio_device(), which returns the vfio_device pointer that
vfio_mdev.c stores in the drv_data.
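
The effect on the VFIO core is visible in the vfio.c hunk below: the
notifier paths no longer need a per-call group lookup and reference;
they can use the group pointer the vfio_device already holds. Roughly
(a sketch of the pattern, not the complete functions):

	/* before */
	group = vfio_group_get_from_dev(dev);
	if (!group)
		return -ENODEV;
	...
	vfio_group_put(group);

	/* after */
	struct vfio_group *group = dev->group;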

Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/kvmgt.c  | 24 ++++++++++++------------
 drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
 drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
 drivers/vfio/vfio.c               | 25 +++++++------------------
 include/linux/vfio.h              |  4 ++--
 5 files changed, 31 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 0787ba5c301f5e..1cec4f1fdfaced 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -810,8 +810,8 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
 
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(vfio_dev->dev, VFIO_IOMMU_NOTIFY, &events,
-				&vgpu->iommu_notifier);
+	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
+				     &vgpu->iommu_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
 			ret);
@@ -819,8 +819,8 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	}
 
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
-	ret = vfio_register_notifier(vfio_dev->dev, VFIO_GROUP_NOTIFY, &events,
-				&vgpu->group_notifier);
+	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
+				     &vgpu->group_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
 			ret);
@@ -873,12 +873,12 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	vgpu->vfio_group = NULL;
 
 undo_register:
-	vfio_unregister_notifier(vfio_dev->dev, VFIO_GROUP_NOTIFY,
-					&vgpu->group_notifier);
+	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
+				 &vgpu->group_notifier);
 
 undo_iommu:
-	vfio_unregister_notifier(vfio_dev->dev, VFIO_IOMMU_NOTIFY,
-					&vgpu->iommu_notifier);
+	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
+				 &vgpu->iommu_notifier);
 out:
 	return ret;
 }
@@ -907,13 +907,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 
 	intel_gvt_release_vgpu(vgpu);
 
-	ret = vfio_unregister_notifier(vgpu->vfio_device.dev, VFIO_IOMMU_NOTIFY,
-					&vgpu->iommu_notifier);
+	ret = vfio_unregister_notifier(&vgpu->vfio_device, VFIO_IOMMU_NOTIFY,
+				       &vgpu->iommu_notifier);
 	drm_WARN(&i915->drm, ret,
 		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
 
-	ret = vfio_unregister_notifier(vgpu->vfio_device.dev, VFIO_GROUP_NOTIFY,
-					&vgpu->group_notifier);
+	ret = vfio_unregister_notifier(&vgpu->vfio_device, VFIO_GROUP_NOTIFY,
+				       &vgpu->group_notifier);
 	drm_WARN(&i915->drm, ret,
 		 "vfio_unregister_notifier for group failed: %d\n", ret);
 
diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
index c4d60cdbf247bf..b49e2e9db2dc6f 100644
--- a/drivers/s390/cio/vfio_ccw_ops.c
+++ b/drivers/s390/cio/vfio_ccw_ops.c
@@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 
 	private->nb.notifier_call = vfio_ccw_mdev_notifier;
 
-	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
+	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
 				     &events, &private->nb);
 	if (ret)
 		return ret;
@@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct vfio_device *vdev)
 
 out_unregister:
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
-				 &private->nb);
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 	return ret;
 }
 
@@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct vfio_device *vdev)
 
 	cp_free(&private->cp);
 	vfio_ccw_unregister_dev_regions(private);
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private->nb);
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
 }
 
 static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private *private,
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index ee0a3bf8f476ca..bfa7ee6ef532d9 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct vfio_device *vdev)
 	matrix_mdev->group_notifier.notifier_call = vfio_ap_mdev_group_notifier;
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
 
-	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
-				     &events, &matrix_mdev->group_notifier);
+	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
+				     &matrix_mdev->group_notifier);
 	if (ret)
 		return ret;
 
 	matrix_mdev->iommu_notifier.notifier_call = vfio_ap_mdev_iommu_notifier;
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
-	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
-				     &events, &matrix_mdev->iommu_notifier);
+	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
+				     &matrix_mdev->iommu_notifier);
 	if (ret)
 		goto out_unregister_group;
 	return 0;
 
 out_unregister_group:
-	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
 				 &matrix_mdev->group_notifier);
 	return ret;
 }
@@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct vfio_device *vdev)
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
 				 &matrix_mdev->iommu_notifier);
-	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
+	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
 				 &matrix_mdev->group_notifier);
 	vfio_ap_mdev_unset_kvm(matrix_mdev);
 }
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index a4555014bd1e72..8a5c46aa2bef61 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct vfio_group *group,
 	return ret;
 }
 
-int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
+int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 			   unsigned long *events, struct notifier_block *nb)
 {
-	struct vfio_group *group;
+	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!dev || !nb || !events || (*events == 0))
+	if (!nb || !events || (*events == 0))
 		return -EINVAL;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_register_iommu_notifier(group, events, nb);
@@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
 	default:
 		ret = -EINVAL;
 	}
-
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_register_notifier);
 
-int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
+int vfio_unregister_notifier(struct vfio_device *dev,
+			     enum vfio_notify_type type,
 			     struct notifier_block *nb)
 {
-	struct vfio_group *group;
+	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!dev || !nb)
+	if (!nb)
 		return -EINVAL;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
 	switch (type) {
 	case VFIO_IOMMU_NOTIFY:
 		ret = vfio_unregister_iommu_notifier(group, nb);
@@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
 	default:
 		ret = -EINVAL;
 	}
-
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unregister_notifier);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 66dda06ec42d1b..748ec0e0293aea 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -178,11 +178,11 @@ enum vfio_notify_type {
 /* events for VFIO_GROUP_NOTIFY */
 #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
 
-extern int vfio_register_notifier(struct device *dev,
+extern int vfio_register_notifier(struct vfio_device *dev,
 				  enum vfio_notify_type type,
 				  unsigned long *required_events,
 				  struct notifier_block *nb);
-extern int vfio_unregister_notifier(struct device *dev,
+extern int vfio_unregister_notifier(struct vfio_device *dev,
 				    enum vfio_notify_type type,
 				    struct notifier_block *nb);
 
-- 
2.36.0


* [PATCH v2 2/7] vfio/ccw: Remove mdev from struct channel_program
@ 2022-04-21 16:28   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

The next patch wants the vfio_device instead. There is no reason to store
a pointer here since we can container_of back to the vfio_device.
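
Concretely, anything that holds the channel_program can get back to the
vfio_device with the pattern used throughout this patch:

	struct vfio_device *vdev =
		&container_of(cp, struct vfio_ccw_private, cp)->vdev;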

Reviewed-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/s390/cio/vfio_ccw_cp.c  | 47 ++++++++++++++++++++-------------
 drivers/s390/cio/vfio_ccw_cp.h  |  4 +--
 drivers/s390/cio/vfio_ccw_fsm.c |  3 +--
 3 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index 8d1b2771c1aa02..7a1cf3091cd647 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -16,6 +16,7 @@
 #include <asm/idals.h>
 
 #include "vfio_ccw_cp.h"
+#include "vfio_ccw_private.h"
 
 struct pfn_array {
 	/* Starting guest physical I/O address. */
@@ -98,17 +99,17 @@ static int pfn_array_alloc(struct pfn_array *pa, u64 iova, unsigned int len)
  * If the pin request partially succeeds, or fails completely,
  * all pages are left unpinned and a negative error value is returned.
  */
-static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
+static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -122,11 +123,11 @@ static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
 }
 
 /* Unpin the pages before releasing the memory. */
-static void pfn_array_unpin_free(struct pfn_array *pa, struct device *mdev)
+static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
@@ -190,8 +191,7 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
  * Within the domain (@mdev), copy @n bytes from a guest physical
  * address (@iova) to a host physical address (@to).
  */
-static long copy_from_iova(struct device *mdev,
-			   void *to, u64 iova,
+static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
 			   unsigned long n)
 {
 	struct pfn_array pa = {0};
@@ -203,9 +203,9 @@ static long copy_from_iova(struct device *mdev,
 	if (ret < 0)
 		return ret;
 
-	ret = pfn_array_pin(&pa, mdev);
+	ret = pfn_array_pin(&pa, vdev);
 	if (ret < 0) {
-		pfn_array_unpin_free(&pa, mdev);
+		pfn_array_unpin_free(&pa, vdev);
 		return ret;
 	}
 
@@ -226,7 +226,7 @@ static long copy_from_iova(struct device *mdev,
 			break;
 	}
 
-	pfn_array_unpin_free(&pa, mdev);
+	pfn_array_unpin_free(&pa, vdev);
 
 	return l;
 }
@@ -423,11 +423,13 @@ static int ccwchain_loop_tic(struct ccwchain *chain,
 
 static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain;
 	int len, ret;
 
 	/* Copy 2K (the most we support today) of possible CCWs */
-	len = copy_from_iova(cp->mdev, cp->guest_cp, cda,
+	len = copy_from_iova(vdev, cp->guest_cp, cda,
 			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
 	if (len)
 		return len;
@@ -508,6 +510,8 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 				 int idx,
 				 struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccw1 *ccw;
 	struct pfn_array *pa;
 	u64 iova;
@@ -526,7 +530,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	if (ccw_is_idal(ccw)) {
 		/* Read first IDAW to see if it's 4K-aligned or not. */
 		/* All subsequent IDAws will be 4K-aligned. */
-		ret = copy_from_iova(cp->mdev, &iova, ccw->cda, sizeof(iova));
+		ret = copy_from_iova(vdev, &iova, ccw->cda, sizeof(iova));
 		if (ret)
 			return ret;
 	} else {
@@ -555,7 +559,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 
 	if (ccw_is_idal(ccw)) {
 		/* Copy guest IDAL into host IDAL */
-		ret = copy_from_iova(cp->mdev, idaws, ccw->cda, idal_len);
+		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
 		if (ret)
 			goto out_unpin;
 
@@ -574,7 +578,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	}
 
 	if (ccw_does_data_transfer(ccw)) {
-		ret = pfn_array_pin(pa, cp->mdev);
+		ret = pfn_array_pin(pa, vdev);
 		if (ret < 0)
 			goto out_unpin;
 	} else {
@@ -590,7 +594,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	return 0;
 
 out_unpin:
-	pfn_array_unpin_free(pa, cp->mdev);
+	pfn_array_unpin_free(pa, vdev);
 out_free_idaws:
 	kfree(idaws);
 out_init:
@@ -632,8 +636,10 @@ static int ccwchain_fetch_one(struct ccwchain *chain,
  * Returns:
  *   %0 on success and a negative error value on failure.
  */
-int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
+int cp_init(struct channel_program *cp, union orb *orb)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	/* custom ratelimit used to avoid flood during guest IPL */
 	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
 	int ret;
@@ -650,11 +656,12 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
 	 * the problem if something does break.
 	 */
 	if (!orb->cmd.pfch && __ratelimit(&ratelimit_state))
-		dev_warn(mdev, "Prefetching channel program even though prefetch not specified in ORB");
+		dev_warn(
+			vdev->dev,
+			"Prefetching channel program even though prefetch not specified in ORB");
 
 	INIT_LIST_HEAD(&cp->ccwchain_list);
 	memcpy(&cp->orb, orb, sizeof(*orb));
-	cp->mdev = mdev;
 
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
@@ -682,6 +689,8 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
  */
 void cp_free(struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain, *temp;
 	int i;
 
@@ -691,7 +700,7 @@ void cp_free(struct channel_program *cp)
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			pfn_array_unpin_free(chain->ch_pa + i, cp->mdev);
+			pfn_array_unpin_free(chain->ch_pa + i, vdev);
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
diff --git a/drivers/s390/cio/vfio_ccw_cp.h b/drivers/s390/cio/vfio_ccw_cp.h
index ba31240ce96594..e4c436199b4cda 100644
--- a/drivers/s390/cio/vfio_ccw_cp.h
+++ b/drivers/s390/cio/vfio_ccw_cp.h
@@ -37,13 +37,11 @@
 struct channel_program {
 	struct list_head ccwchain_list;
 	union orb orb;
-	struct device *mdev;
 	bool initialized;
 	struct ccw1 *guest_cp;
 };
 
-extern int cp_init(struct channel_program *cp, struct device *mdev,
-		   union orb *orb);
+extern int cp_init(struct channel_program *cp, union orb *orb);
 extern void cp_free(struct channel_program *cp);
 extern int cp_prefetch(struct channel_program *cp);
 extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
index e435a9cd92dacf..8483a266051c21 100644
--- a/drivers/s390/cio/vfio_ccw_fsm.c
+++ b/drivers/s390/cio/vfio_ccw_fsm.c
@@ -262,8 +262,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 			errstr = "transport mode";
 			goto err_out;
 		}
-		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
-					      orb);
+		io_region->ret_code = cp_init(&private->cp, orb);
 		if (io_region->ret_code) {
 			VFIO_CCW_MSG_EVENT(2,
 					   "%pUl (%x.%x.%04x): cp_init=%d\n",
-- 
2.36.0


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [Intel-gfx] [PATCH v2 2/7] vfio/ccw: Remove mdev from struct channel_program
@ 2022-04-21 16:28   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

The next patch wants the vfio_device instead. There is no reason to store
a pointer here since we can container_of back to the vfio_device.

Reviewed-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/s390/cio/vfio_ccw_cp.c  | 47 ++++++++++++++++++++-------------
 drivers/s390/cio/vfio_ccw_cp.h  |  4 +--
 drivers/s390/cio/vfio_ccw_fsm.c |  3 +--
 3 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index 8d1b2771c1aa02..7a1cf3091cd647 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -16,6 +16,7 @@
 #include <asm/idals.h>
 
 #include "vfio_ccw_cp.h"
+#include "vfio_ccw_private.h"
 
 struct pfn_array {
 	/* Starting guest physical I/O address. */
@@ -98,17 +99,17 @@ static int pfn_array_alloc(struct pfn_array *pa, u64 iova, unsigned int len)
  * If the pin request partially succeeds, or fails completely,
  * all pages are left unpinned and a negative error value is returned.
  */
-static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
+static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -122,11 +123,11 @@ static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
 }
 
 /* Unpin the pages before releasing the memory. */
-static void pfn_array_unpin_free(struct pfn_array *pa, struct device *mdev)
+static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
@@ -190,8 +191,7 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
  * Within the domain (@mdev), copy @n bytes from a guest physical
  * address (@iova) to a host physical address (@to).
  */
-static long copy_from_iova(struct device *mdev,
-			   void *to, u64 iova,
+static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
 			   unsigned long n)
 {
 	struct pfn_array pa = {0};
@@ -203,9 +203,9 @@ static long copy_from_iova(struct device *mdev,
 	if (ret < 0)
 		return ret;
 
-	ret = pfn_array_pin(&pa, mdev);
+	ret = pfn_array_pin(&pa, vdev);
 	if (ret < 0) {
-		pfn_array_unpin_free(&pa, mdev);
+		pfn_array_unpin_free(&pa, vdev);
 		return ret;
 	}
 
@@ -226,7 +226,7 @@ static long copy_from_iova(struct device *mdev,
 			break;
 	}
 
-	pfn_array_unpin_free(&pa, mdev);
+	pfn_array_unpin_free(&pa, vdev);
 
 	return l;
 }
@@ -423,11 +423,13 @@ static int ccwchain_loop_tic(struct ccwchain *chain,
 
 static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain;
 	int len, ret;
 
 	/* Copy 2K (the most we support today) of possible CCWs */
-	len = copy_from_iova(cp->mdev, cp->guest_cp, cda,
+	len = copy_from_iova(vdev, cp->guest_cp, cda,
 			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
 	if (len)
 		return len;
@@ -508,6 +510,8 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 				 int idx,
 				 struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccw1 *ccw;
 	struct pfn_array *pa;
 	u64 iova;
@@ -526,7 +530,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	if (ccw_is_idal(ccw)) {
 		/* Read first IDAW to see if it's 4K-aligned or not. */
 		/* All subsequent IDAws will be 4K-aligned. */
-		ret = copy_from_iova(cp->mdev, &iova, ccw->cda, sizeof(iova));
+		ret = copy_from_iova(vdev, &iova, ccw->cda, sizeof(iova));
 		if (ret)
 			return ret;
 	} else {
@@ -555,7 +559,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 
 	if (ccw_is_idal(ccw)) {
 		/* Copy guest IDAL into host IDAL */
-		ret = copy_from_iova(cp->mdev, idaws, ccw->cda, idal_len);
+		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
 		if (ret)
 			goto out_unpin;
 
@@ -574,7 +578,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	}
 
 	if (ccw_does_data_transfer(ccw)) {
-		ret = pfn_array_pin(pa, cp->mdev);
+		ret = pfn_array_pin(pa, vdev);
 		if (ret < 0)
 			goto out_unpin;
 	} else {
@@ -590,7 +594,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	return 0;
 
 out_unpin:
-	pfn_array_unpin_free(pa, cp->mdev);
+	pfn_array_unpin_free(pa, vdev);
 out_free_idaws:
 	kfree(idaws);
 out_init:
@@ -632,8 +636,10 @@ static int ccwchain_fetch_one(struct ccwchain *chain,
  * Returns:
  *   %0 on success and a negative error value on failure.
  */
-int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
+int cp_init(struct channel_program *cp, union orb *orb)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	/* custom ratelimit used to avoid flood during guest IPL */
 	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
 	int ret;
@@ -650,11 +656,12 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
 	 * the problem if something does break.
 	 */
 	if (!orb->cmd.pfch && __ratelimit(&ratelimit_state))
-		dev_warn(mdev, "Prefetching channel program even though prefetch not specified in ORB");
+		dev_warn(
+			vdev->dev,
+			"Prefetching channel program even though prefetch not specified in ORB");
 
 	INIT_LIST_HEAD(&cp->ccwchain_list);
 	memcpy(&cp->orb, orb, sizeof(*orb));
-	cp->mdev = mdev;
 
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
@@ -682,6 +689,8 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
  */
 void cp_free(struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain, *temp;
 	int i;
 
@@ -691,7 +700,7 @@ void cp_free(struct channel_program *cp)
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			pfn_array_unpin_free(chain->ch_pa + i, cp->mdev);
+			pfn_array_unpin_free(chain->ch_pa + i, vdev);
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
diff --git a/drivers/s390/cio/vfio_ccw_cp.h b/drivers/s390/cio/vfio_ccw_cp.h
index ba31240ce96594..e4c436199b4cda 100644
--- a/drivers/s390/cio/vfio_ccw_cp.h
+++ b/drivers/s390/cio/vfio_ccw_cp.h
@@ -37,13 +37,11 @@
 struct channel_program {
 	struct list_head ccwchain_list;
 	union orb orb;
-	struct device *mdev;
 	bool initialized;
 	struct ccw1 *guest_cp;
 };
 
-extern int cp_init(struct channel_program *cp, struct device *mdev,
-		   union orb *orb);
+extern int cp_init(struct channel_program *cp, union orb *orb);
 extern void cp_free(struct channel_program *cp);
 extern int cp_prefetch(struct channel_program *cp);
 extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
index e435a9cd92dacf..8483a266051c21 100644
--- a/drivers/s390/cio/vfio_ccw_fsm.c
+++ b/drivers/s390/cio/vfio_ccw_fsm.c
@@ -262,8 +262,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 			errstr = "transport mode";
 			goto err_out;
 		}
-		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
-					      orb);
+		io_region->ret_code = cp_init(&private->cp, orb);
 		if (io_region->ret_code) {
 			VFIO_CCW_MSG_EVENT(2,
 					   "%pUl (%x.%x.%04x): cp_init=%d\n",
-- 
2.36.0



* [PATCH v2 2/7] vfio/ccw: Remove mdev from struct channel_program
@ 2022-04-21 16:28   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Tian, Kevin, Liu, Yi L

The next patch wants the vfio_device instead. There is no reason to store
the mdev's struct device pointer here since we can container_of() back to
the vfio_device.
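
For reference, a minimal sketch (not part of the patch) of what that lookup
looks like once the pointer is gone; it matches the container_of() pattern
added in the hunks below, and the helper name is only illustrative:

    static struct vfio_device *cp_to_vdev(struct channel_program *cp)
    {
        /* cp is embedded in struct vfio_ccw_private, so walk back up */
        return &container_of(cp, struct vfio_ccw_private, cp)->vdev;
    }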

Reviewed-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/s390/cio/vfio_ccw_cp.c  | 47 ++++++++++++++++++++-------------
 drivers/s390/cio/vfio_ccw_cp.h  |  4 +--
 drivers/s390/cio/vfio_ccw_fsm.c |  3 +--
 3 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index 8d1b2771c1aa02..7a1cf3091cd647 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -16,6 +16,7 @@
 #include <asm/idals.h>
 
 #include "vfio_ccw_cp.h"
+#include "vfio_ccw_private.h"
 
 struct pfn_array {
 	/* Starting guest physical I/O address. */
@@ -98,17 +99,17 @@ static int pfn_array_alloc(struct pfn_array *pa, u64 iova, unsigned int len)
  * If the pin request partially succeeds, or fails completely,
  * all pages are left unpinned and a negative error value is returned.
  */
-static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
+static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -122,11 +123,11 @@ static int pfn_array_pin(struct pfn_array *pa, struct device *mdev)
 }
 
 /* Unpin the pages before releasing the memory. */
-static void pfn_array_unpin_free(struct pfn_array *pa, struct device *mdev)
+static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(mdev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
@@ -190,8 +191,7 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
  * Within the domain (@mdev), copy @n bytes from a guest physical
  * address (@iova) to a host physical address (@to).
  */
-static long copy_from_iova(struct device *mdev,
-			   void *to, u64 iova,
+static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
 			   unsigned long n)
 {
 	struct pfn_array pa = {0};
@@ -203,9 +203,9 @@ static long copy_from_iova(struct device *mdev,
 	if (ret < 0)
 		return ret;
 
-	ret = pfn_array_pin(&pa, mdev);
+	ret = pfn_array_pin(&pa, vdev);
 	if (ret < 0) {
-		pfn_array_unpin_free(&pa, mdev);
+		pfn_array_unpin_free(&pa, vdev);
 		return ret;
 	}
 
@@ -226,7 +226,7 @@ static long copy_from_iova(struct device *mdev,
 			break;
 	}
 
-	pfn_array_unpin_free(&pa, mdev);
+	pfn_array_unpin_free(&pa, vdev);
 
 	return l;
 }
@@ -423,11 +423,13 @@ static int ccwchain_loop_tic(struct ccwchain *chain,
 
 static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain;
 	int len, ret;
 
 	/* Copy 2K (the most we support today) of possible CCWs */
-	len = copy_from_iova(cp->mdev, cp->guest_cp, cda,
+	len = copy_from_iova(vdev, cp->guest_cp, cda,
 			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
 	if (len)
 		return len;
@@ -508,6 +510,8 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 				 int idx,
 				 struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccw1 *ccw;
 	struct pfn_array *pa;
 	u64 iova;
@@ -526,7 +530,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	if (ccw_is_idal(ccw)) {
 		/* Read first IDAW to see if it's 4K-aligned or not. */
 		/* All subsequent IDAws will be 4K-aligned. */
-		ret = copy_from_iova(cp->mdev, &iova, ccw->cda, sizeof(iova));
+		ret = copy_from_iova(vdev, &iova, ccw->cda, sizeof(iova));
 		if (ret)
 			return ret;
 	} else {
@@ -555,7 +559,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 
 	if (ccw_is_idal(ccw)) {
 		/* Copy guest IDAL into host IDAL */
-		ret = copy_from_iova(cp->mdev, idaws, ccw->cda, idal_len);
+		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
 		if (ret)
 			goto out_unpin;
 
@@ -574,7 +578,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	}
 
 	if (ccw_does_data_transfer(ccw)) {
-		ret = pfn_array_pin(pa, cp->mdev);
+		ret = pfn_array_pin(pa, vdev);
 		if (ret < 0)
 			goto out_unpin;
 	} else {
@@ -590,7 +594,7 @@ static int ccwchain_fetch_direct(struct ccwchain *chain,
 	return 0;
 
 out_unpin:
-	pfn_array_unpin_free(pa, cp->mdev);
+	pfn_array_unpin_free(pa, vdev);
 out_free_idaws:
 	kfree(idaws);
 out_init:
@@ -632,8 +636,10 @@ static int ccwchain_fetch_one(struct ccwchain *chain,
  * Returns:
  *   %0 on success and a negative error value on failure.
  */
-int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
+int cp_init(struct channel_program *cp, union orb *orb)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	/* custom ratelimit used to avoid flood during guest IPL */
 	static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 1);
 	int ret;
@@ -650,11 +656,12 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
 	 * the problem if something does break.
 	 */
 	if (!orb->cmd.pfch && __ratelimit(&ratelimit_state))
-		dev_warn(mdev, "Prefetching channel program even though prefetch not specified in ORB");
+		dev_warn(
+			vdev->dev,
+			"Prefetching channel program even though prefetch not specified in ORB");
 
 	INIT_LIST_HEAD(&cp->ccwchain_list);
 	memcpy(&cp->orb, orb, sizeof(*orb));
-	cp->mdev = mdev;
 
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
@@ -682,6 +689,8 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb)
  */
 void cp_free(struct channel_program *cp)
 {
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
 	struct ccwchain *chain, *temp;
 	int i;
 
@@ -691,7 +700,7 @@ void cp_free(struct channel_program *cp)
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			pfn_array_unpin_free(chain->ch_pa + i, cp->mdev);
+			pfn_array_unpin_free(chain->ch_pa + i, vdev);
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
diff --git a/drivers/s390/cio/vfio_ccw_cp.h b/drivers/s390/cio/vfio_ccw_cp.h
index ba31240ce96594..e4c436199b4cda 100644
--- a/drivers/s390/cio/vfio_ccw_cp.h
+++ b/drivers/s390/cio/vfio_ccw_cp.h
@@ -37,13 +37,11 @@
 struct channel_program {
 	struct list_head ccwchain_list;
 	union orb orb;
-	struct device *mdev;
 	bool initialized;
 	struct ccw1 *guest_cp;
 };
 
-extern int cp_init(struct channel_program *cp, struct device *mdev,
-		   union orb *orb);
+extern int cp_init(struct channel_program *cp, union orb *orb);
 extern void cp_free(struct channel_program *cp);
 extern int cp_prefetch(struct channel_program *cp);
 extern union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c
index e435a9cd92dacf..8483a266051c21 100644
--- a/drivers/s390/cio/vfio_ccw_fsm.c
+++ b/drivers/s390/cio/vfio_ccw_fsm.c
@@ -262,8 +262,7 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 			errstr = "transport mode";
 			goto err_out;
 		}
-		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
-					      orb);
+		io_region->ret_code = cp_init(&private->cp, orb);
 		if (io_region->ret_code) {
 			VFIO_CCW_MSG_EVENT(2,
 					   "%pUl (%x.%x.%04x): cp_init=%d\n",
-- 
2.36.0



* [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-21 16:28   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

Every caller has a readily available vfio_device pointer; use that instead
of passing in a generic struct device. The struct vfio_device already
contains the group we need, so this avoids complexity, extra refcounting,
and a confusing lifecycle model.
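
As a usage sketch only (the struct and function names here are made up for
illustration; just the vfio_pin_pages()/vfio_unpin_pages() signatures come
from this patch), a driver that embeds its vfio_device can now do:

    struct my_mdev {
        struct vfio_device vdev;    /* embedded and registered with VFIO */
    };

    static int my_pin_gfn(struct my_mdev *m, unsigned long gfn,
                          unsigned long *pfn)
    {
        /* pin one page read/write; returns pages pinned or -errno */
        return vfio_pin_pages(&m->vdev, &gfn, 1,
                              IOMMU_READ | IOMMU_WRITE, pfn);
    }

    static void my_unpin_gfn(struct my_mdev *m, unsigned long gfn)
    {
        vfio_unpin_pages(&m->vdev, &gfn, 1);
    }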

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 .../driver-api/vfio-mediated-device.rst       |  4 +-
 drivers/s390/cio/vfio_ccw_cp.c                |  6 +--
 drivers/s390/crypto/vfio_ap_ops.c             |  9 ++---
 drivers/vfio/vfio.c                           | 40 ++++++-------------
 include/linux/vfio.h                          |  4 +-
 5 files changed, 24 insertions(+), 39 deletions(-)

diff --git a/Documentation/driver-api/vfio-mediated-device.rst b/Documentation/driver-api/vfio-mediated-device.rst
index 784bbeb22adcf5..2f9e6025eb2fe1 100644
--- a/Documentation/driver-api/vfio-mediated-device.rst
+++ b/Documentation/driver-api/vfio-mediated-device.rst
@@ -262,10 +262,10 @@ Translation APIs for Mediated Devices
 The following APIs are provided for translating user pfn to host pfn in a VFIO
 driver::
 
-	extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
+	int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 				  int npage, int prot, unsigned long *phys_pfn);
 
-	extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
+	int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 				    int npage);
 
 These functions call back into the back-end IOMMU module by using the pin_pages
diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
index 7a1cf3091cd647..0c2be9421ab78f 100644
--- a/drivers/s390/cio/vfio_ccw_cp.c
+++ b/drivers/s390/cio/vfio_ccw_cp.c
@@ -103,13 +103,13 @@ static int pfn_array_pin(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	int ret = 0;
 
-	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
+	ret = vfio_pin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr,
 			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
 
 	if (ret < 0) {
 		goto err_out;
 	} else if (ret > 0 && ret != pa->pa_nr) {
-		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
+		vfio_unpin_pages(vdev, pa->pa_iova_pfn, ret);
 		ret = -EINVAL;
 		goto err_out;
 	}
@@ -127,7 +127,7 @@ static void pfn_array_unpin_free(struct pfn_array *pa, struct vfio_device *vdev)
 {
 	/* Only unpin if any pages were pinned to begin with */
 	if (pa->pa_nr)
-		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
+		vfio_unpin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr);
 	pa->pa_nr = 0;
 	kfree(pa->pa_iova_pfn);
 }
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index bfa7ee6ef532d9..e8914024f5b1af 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -124,8 +124,7 @@ static void vfio_ap_free_aqic_resources(struct vfio_ap_queue *q)
 		q->saved_isc = VFIO_AP_ISC_INVALID;
 	}
 	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
-		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
-				 &q->saved_pfn, 1);
+		vfio_unpin_pages(&q->matrix_mdev->vdev, &q->saved_pfn, 1);
 		q->saved_pfn = 0;
 	}
 }
@@ -258,7 +257,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		return status;
 	}
 
-	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
+	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
 			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
 	switch (ret) {
 	case 1:
@@ -301,7 +300,7 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		break;
 	case AP_RESPONSE_OTHERWISE_CHANGED:
 		/* We could not modify IRQ setings: clear new configuration */
-		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1);
+		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
 		kvm_s390_gisc_unregister(kvm, isc);
 		break;
 	default:
@@ -1250,7 +1249,7 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
 		struct vfio_iommu_type1_dma_unmap *unmap = data;
 		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
 
-		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn, 1);
+		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
 		return NOTIFY_OK;
 	}
 
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 8a5c46aa2bef61..24b92a45cfc8f1 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2142,32 +2142,26 @@ EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
  * @phys_pfn[out]: array of host PFNs
  * Return error or number of pages pinned.
  */
-int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
+int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 		   int prot, unsigned long *phys_pfn)
 {
 	struct vfio_container *container;
-	struct vfio_group *group;
+	struct vfio_group *group = vdev->group;
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!dev || !user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage)
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
-	if (group->dev_counter > 1) {
-		ret = -EINVAL;
-		goto err_pin_pages;
-	}
+	if (group->dev_counter > 1)
+		return -EINVAL;
 
 	ret = vfio_group_add_container_user(group);
 	if (ret)
-		goto err_pin_pages;
+		return ret;
 
 	container = group->container;
 	driver = container->iommu_driver;
@@ -2180,8 +2174,6 @@ int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
 
 	vfio_group_try_dissolve_container(group);
 
-err_pin_pages:
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_pin_pages);
@@ -2195,28 +2187,24 @@ EXPORT_SYMBOL(vfio_pin_pages);
  *                 be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
  * Return error or number of pages unpinned.
  */
-int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
+int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
+		     int npage)
 {
 	struct vfio_container *container;
-	struct vfio_group *group;
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!dev || !user_pfn || !npage)
+	if (!user_pfn || !npage)
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return -ENODEV;
-
-	ret = vfio_group_add_container_user(group);
+	ret = vfio_group_add_container_user(vdev->group);
 	if (ret)
-		goto err_unpin_pages;
+		return ret;
 
-	container = group->container;
+	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
 		ret = driver->ops->unpin_pages(container->iommu_data, user_pfn,
@@ -2224,10 +2212,8 @@ int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(group);
+	vfio_group_try_dissolve_container(vdev->group);
 
-err_unpin_pages:
-	vfio_group_put(group);
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 748ec0e0293aea..8f2a09801a660b 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -150,9 +150,9 @@ extern long vfio_external_check_extension(struct vfio_group *group,
 
 #define VFIO_PIN_PAGES_MAX_ENTRIES	(PAGE_SIZE/sizeof(unsigned long))
 
-extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
+extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			  int npage, int prot, unsigned long *phys_pfn);
-extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
+extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			    int npage);
 
 extern int vfio_group_pin_pages(struct vfio_group *group,
-- 
2.36.0



* [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-21 16:28   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

Every caller has a readily available vfio_device pointer; use that instead
of passing in a generic struct device. The struct vfio_device already
contains the group we need, so this avoids complexity, extra refcounting,
and a confusing lifecycle model.
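
A short sketch of the new calling convention (the wrapper names are
illustrative; only the vfio_dma_rw() signature comes from this patch).
Note that callers no longer manage an external-user group reference
themselves, since the helper now takes and drops it internally:

    /* copy @len bytes from guest IOVA @iova into @buf */
    static int read_guest(struct vfio_device *vdev, dma_addr_t iova,
                          void *buf, size_t len)
    {
        return vfio_dma_rw(vdev, iova, buf, len, false);
    }

    /* ... and the write direction */
    static int write_guest(struct vfio_device *vdev, dma_addr_t iova,
                           void *buf, size_t len)
    {
        return vfio_dma_rw(vdev, iova, buf, len, true);
    }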

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h |  4 ++--
 drivers/vfio/vfio.c            | 24 +++++++++++-------------
 include/linux/vfio.h           |  2 +-
 3 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 03ecffc2ba56a9..5a28ee965b7f3e 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -732,7 +732,7 @@ static inline int intel_gvt_read_gpa(struct intel_vgpu *vgpu, unsigned long gpa,
 {
 	if (!vgpu->attached)
 		return -ESRCH;
-	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, false);
+	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);
 }
 
 /**
@@ -750,7 +750,7 @@ static inline int intel_gvt_write_gpa(struct intel_vgpu *vgpu,
 {
 	if (!vgpu->attached)
 		return -ESRCH;
-	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, true);
+	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, true);
 }
 
 void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 24b92a45cfc8f1..d10d20d393b706 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -2323,32 +2323,28 @@ EXPORT_SYMBOL(vfio_group_unpin_pages);
  * As the read/write of user space memory is conducted via the CPUs and is
  * not a real device DMA, it is not necessary to pin the user space memory.
  *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: VFIO group
+ * @vdev [in]		: VFIO device
  * @user_iova [in]	: base IOVA of a user space buffer
  * @data [in]		: pointer to kernel buffer
  * @len [in]		: kernel buffer length
  * @write		: indicate read or write
  * Return error code on failure or 0 on success.
  */
-int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
-		void *data, size_t len, bool write)
+int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void *data,
+		size_t len, bool write)
 {
 	struct vfio_container *container;
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!group || !data || len <= 0)
+	if (!data || len <= 0)
 		return -EINVAL;
 
-	container = group->container;
+	ret = vfio_group_add_container_user(vdev->group);
+	if (ret)
+		return ret;
+
+	container = vdev->group->container;
 	driver = container->iommu_driver;
 
 	if (likely(driver && driver->ops->dma_rw))
@@ -2357,6 +2353,8 @@ int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
 	else
 		ret = -ENOTTY;
 
+	vfio_group_try_dissolve_container(vdev->group);
+
 	return ret;
 }
 EXPORT_SYMBOL(vfio_dma_rw);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 8f2a09801a660b..91d46e532ca104 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -161,7 +161,7 @@ extern int vfio_group_pin_pages(struct vfio_group *group,
 extern int vfio_group_unpin_pages(struct vfio_group *group,
 				  unsigned long *user_iova_pfn, int npage);
 
-extern int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
+extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 		       void *data, size_t len, bool write);
 
 extern struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group);
-- 
2.36.0



* [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-21 16:28   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
reason to use a group interface here; kvmgt has easy access to a
vfio_device.

Delete intel_vgpu::vfio_group since these calls were the last users.
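
Pulled out of the hunks below for a side-by-side view, the conversion at a
pin site is mechanical (no new reference needs to be taken or dropped):

    /* before: goes through the separately-held vfio_group */
    ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
                               IOMMU_READ | IOMMU_WRITE, &pfn);

    /* after: the embedded vfio_device is always at hand */
    ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
                         IOMMU_READ | IOMMU_WRITE, &pfn);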

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h   |  1 -
 drivers/gpu/drm/i915/gvt/kvmgt.c | 27 ++++++---------------------
 2 files changed, 6 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 5a28ee965b7f3e..2af4c83e733c6c 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -231,7 +231,6 @@ struct intel_vgpu {
 	struct kvm *kvm;
 	struct work_struct release_work;
 	atomic_t released;
-	struct vfio_group *vfio_group;
 
 	struct kvm_page_track_notifier_node track_node;
 #define NR_BKT (1 << 18)
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 1cec4f1fdfaced..7655ffa97d5116 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -243,7 +243,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 	for (npage = 0; npage < total_pages; npage++) {
 		unsigned long cur_gfn = gfn + npage;
 
-		ret = vfio_group_unpin_pages(vgpu->vfio_group, &cur_gfn, 1);
+		ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
 		drm_WARN_ON(&i915->drm, ret != 1);
 	}
 }
@@ -266,8 +266,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long cur_gfn = gfn + npage;
 		unsigned long pfn;
 
-		ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
-					   IOMMU_READ | IOMMU_WRITE, &pfn);
+		ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
+				     IOMMU_READ | IOMMU_WRITE, &pfn);
 		if (ret != 1) {
 			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
 				     cur_gfn, ret);
@@ -804,7 +804,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
 	unsigned long events;
 	int ret;
-	struct vfio_group *vfio_group;
 
 	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
 	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
@@ -827,28 +826,19 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 		goto undo_iommu;
 	}
 
-	vfio_group =
-		vfio_group_get_external_user_from_dev(vgpu->vfio_device.dev);
-	if (IS_ERR_OR_NULL(vfio_group)) {
-		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
-		gvt_vgpu_err("vfio_group_get_external_user_from_dev failed\n");
-		goto undo_register;
-	}
-	vgpu->vfio_group = vfio_group;
-
 	ret = -EEXIST;
 	if (vgpu->attached)
-		goto undo_group;
+		goto undo_register;
 
 	ret = -ESRCH;
 	if (!vgpu->kvm || vgpu->kvm->mm != current->mm) {
 		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
-		goto undo_group;
+		goto undo_register;
 	}
 
 	ret = -EEXIST;
 	if (__kvmgt_vgpu_exist(vgpu))
-		goto undo_group;
+		goto undo_register;
 
 	vgpu->attached = true;
 	kvm_get_kvm(vgpu->kvm);
@@ -868,10 +858,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	atomic_set(&vgpu->released, 0);
 	return 0;
 
-undo_group:
-	vfio_group_put_external_user(vgpu->vfio_group);
-	vgpu->vfio_group = NULL;
-
 undo_register:
 	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 				 &vgpu->group_notifier);
@@ -925,7 +911,6 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	gvt_cache_destroy(vgpu);
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
-	vfio_group_put_external_user(vgpu->vfio_group);
 
 	vgpu->kvm = NULL;
 	vgpu->attached = false;
-- 
2.36.0



* [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-21 16:28   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Tian, Kevin, Liu, Yi L

Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
reason to use a group interface here, kvmgt has easy access to a
vfio_device.

Delete kvmgt_vdev::vfio_group since these calls were the last users.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h   |  1 -
 drivers/gpu/drm/i915/gvt/kvmgt.c | 27 ++++++---------------------
 2 files changed, 6 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 5a28ee965b7f3e..2af4c83e733c6c 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -231,7 +231,6 @@ struct intel_vgpu {
 	struct kvm *kvm;
 	struct work_struct release_work;
 	atomic_t released;
-	struct vfio_group *vfio_group;
 
 	struct kvm_page_track_notifier_node track_node;
 #define NR_BKT (1 << 18)
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 1cec4f1fdfaced..7655ffa97d5116 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -243,7 +243,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 	for (npage = 0; npage < total_pages; npage++) {
 		unsigned long cur_gfn = gfn + npage;
 
-		ret = vfio_group_unpin_pages(vgpu->vfio_group, &cur_gfn, 1);
+		ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
 		drm_WARN_ON(&i915->drm, ret != 1);
 	}
 }
@@ -266,8 +266,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long cur_gfn = gfn + npage;
 		unsigned long pfn;
 
-		ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
-					   IOMMU_READ | IOMMU_WRITE, &pfn);
+		ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
+				     IOMMU_READ | IOMMU_WRITE, &pfn);
 		if (ret != 1) {
 			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
 				     cur_gfn, ret);
@@ -804,7 +804,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
 	unsigned long events;
 	int ret;
-	struct vfio_group *vfio_group;
 
 	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
 	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
@@ -827,28 +826,19 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 		goto undo_iommu;
 	}
 
-	vfio_group =
-		vfio_group_get_external_user_from_dev(vgpu->vfio_device.dev);
-	if (IS_ERR_OR_NULL(vfio_group)) {
-		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
-		gvt_vgpu_err("vfio_group_get_external_user_from_dev failed\n");
-		goto undo_register;
-	}
-	vgpu->vfio_group = vfio_group;
-
 	ret = -EEXIST;
 	if (vgpu->attached)
-		goto undo_group;
+		goto undo_register;
 
 	ret = -ESRCH;
 	if (!vgpu->kvm || vgpu->kvm->mm != current->mm) {
 		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
-		goto undo_group;
+		goto undo_register;
 	}
 
 	ret = -EEXIST;
 	if (__kvmgt_vgpu_exist(vgpu))
-		goto undo_group;
+		goto undo_register;
 
 	vgpu->attached = true;
 	kvm_get_kvm(vgpu->kvm);
@@ -868,10 +858,6 @@ static int intel_vgpu_open_device(struct vfio_device *vfio_dev)
 	atomic_set(&vgpu->released, 0);
 	return 0;
 
-undo_group:
-	vfio_group_put_external_user(vgpu->vfio_group);
-	vgpu->vfio_group = NULL;
-
 undo_register:
 	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
 				 &vgpu->group_notifier);
@@ -925,7 +911,6 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	gvt_cache_destroy(vgpu);
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
-	vfio_group_put_external_user(vgpu->vfio_group);
 
 	vgpu->kvm = NULL;
 	vgpu->attached = false;
-- 
2.36.0


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH v2 6/7] vfio: Remove dead code
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-21 16:28   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

Now that callers have been updated to use the vfio_device APIs, the
driver-facing group interface is no longer used; delete it:

- vfio_group_get_external_user_from_dev()
- vfio_group_pin_pages()
- vfio_group_unpin_pages()
- vfio_group_iommu_domain()

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/vfio/vfio.c  | 151 -------------------------------------------
 include/linux/vfio.h |  11 ----
 2 files changed, 162 deletions(-)

vfio_group_put_external_user() can also be removed once this is combined
with the parallel KVM series.
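
For reference, the surviving device-level calls keep the same semantics as
the removed group ones; a converted call site looks roughly like the sketch
below. This is purely illustrative (the demo_* names are not part of the
series) and assumes the driver only uses it between open_device() and
close_device():

#include <linux/iommu.h>
#include <linux/vfio.h>

/* Pin a single guest IOVA PFN and return its host PFN. */
static int demo_pin_one(struct vfio_device *vdev, unsigned long gfn,
			unsigned long *pfn)
{
	int ret;

	ret = vfio_pin_pages(vdev, &gfn, 1, IOMMU_READ | IOMMU_WRITE, pfn);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	return 0;
}

/* Undo demo_pin_one(). */
static void demo_unpin_one(struct vfio_device *vdev, unsigned long gfn)
{
	vfio_unpin_pages(vdev, &gfn, 1);
}

This mirrors the kvmgt conversion in patch 5; the driver no longer needs a
vfio_group lookup at all.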

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index d10d20d393b706..ba6fae95555ec7 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1947,44 +1947,6 @@ struct vfio_group *vfio_group_get_external_user(struct file *filep)
 }
 EXPORT_SYMBOL_GPL(vfio_group_get_external_user);
 
-/*
- * External user API, exported by symbols to be linked dynamically.
- * The external user passes in a device pointer
- * to verify that:
- *	- A VFIO group is assiciated with the device;
- *	- IOMMU is set for the group.
- * If both checks passed, vfio_group_get_external_user_from_dev()
- * increments the container user counter to prevent the VFIO group
- * from disposal before external user exits and returns the pointer
- * to the VFIO group.
- *
- * When the external user finishes using the VFIO group, it calls
- * vfio_group_put_external_user() to release the VFIO group and
- * decrement the container user counter.
- *
- * @dev [in]	: device
- * Return error PTR or pointer to VFIO group.
- */
-
-struct vfio_group *vfio_group_get_external_user_from_dev(struct device *dev)
-{
-	struct vfio_group *group;
-	int ret;
-
-	group = vfio_group_get_from_dev(dev);
-	if (!group)
-		return ERR_PTR(-ENODEV);
-
-	ret = vfio_group_add_container_user(group);
-	if (ret) {
-		vfio_group_put(group);
-		return ERR_PTR(ret);
-	}
-
-	return group;
-}
-EXPORT_SYMBOL_GPL(vfio_group_get_external_user_from_dev);
-
 void vfio_group_put_external_user(struct vfio_group *group)
 {
 	vfio_group_try_dissolve_container(group);
@@ -2218,101 +2180,6 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
 
-/*
- * Pin a set of guest IOVA PFNs and return their associated host PFNs for a
- * VFIO group.
- *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: VFIO group
- * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be pinned.
- * @npage [in]		: count of elements in user_iova_pfn array.
- *			  This count should not be greater
- *			  VFIO_PIN_PAGES_MAX_ENTRIES.
- * @prot [in]		: protection flags
- * @phys_pfn [out]	: array of host PFNs
- * Return error or number of pages pinned.
- */
-int vfio_group_pin_pages(struct vfio_group *group,
-			 unsigned long *user_iova_pfn, int npage,
-			 int prot, unsigned long *phys_pfn)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	if (!group || !user_iova_pfn || !phys_pfn || !npage)
-		return -EINVAL;
-
-	if (group->dev_counter > 1)
-		return -EINVAL;
-
-	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
-		return -E2BIG;
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->pin_pages))
-		ret = driver->ops->pin_pages(container->iommu_data,
-					     group->iommu_group, user_iova_pfn,
-					     npage, prot, phys_pfn);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-EXPORT_SYMBOL(vfio_group_pin_pages);
-
-/*
- * Unpin a set of guest IOVA PFNs for a VFIO group.
- *
- * The caller needs to call vfio_group_get_external_user() or
- * vfio_group_get_external_user_from_dev() prior to calling this interface,
- * so as to prevent the VFIO group from disposal in the middle of the call.
- * But it can keep the reference to the VFIO group for several calls into
- * this interface.
- * After finishing using of the VFIO group, the caller needs to release the
- * VFIO group by calling vfio_group_put_external_user().
- *
- * @group [in]		: vfio group
- * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be unpinned.
- * @npage [in]		: count of elements in user_iova_pfn array.
- *			  This count should not be greater than
- *			  VFIO_PIN_PAGES_MAX_ENTRIES.
- * Return error or number of pages unpinned.
- */
-int vfio_group_unpin_pages(struct vfio_group *group,
-			   unsigned long *user_iova_pfn, int npage)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-	int ret;
-
-	if (!group || !user_iova_pfn || !npage)
-		return -EINVAL;
-
-	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
-		return -E2BIG;
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->unpin_pages))
-		ret = driver->ops->unpin_pages(container->iommu_data,
-					       user_iova_pfn, npage);
-	else
-		ret = -ENOTTY;
-
-	return ret;
-}
-EXPORT_SYMBOL(vfio_group_unpin_pages);
-
-
 /*
  * This interface allows the CPUs to perform some sort of virtual DMA on
  * behalf of the device.
@@ -2515,24 +2382,6 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 }
 EXPORT_SYMBOL(vfio_unregister_notifier);
 
-struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group)
-{
-	struct vfio_container *container;
-	struct vfio_iommu_driver *driver;
-
-	if (!group)
-		return ERR_PTR(-EINVAL);
-
-	container = group->container;
-	driver = container->iommu_driver;
-	if (likely(driver && driver->ops->group_iommu_domain))
-		return driver->ops->group_iommu_domain(container->iommu_data,
-						       group->iommu_group);
-
-	return ERR_PTR(-ENOTTY);
-}
-EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);
-
 /*
  * Module/class support
  */
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 91d46e532ca104..9a9981c2622896 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -140,8 +140,6 @@ int vfio_mig_get_next_state(struct vfio_device *device,
  */
 extern struct vfio_group *vfio_group_get_external_user(struct file *filep);
 extern void vfio_group_put_external_user(struct vfio_group *group);
-extern struct vfio_group *vfio_group_get_external_user_from_dev(struct device
-								*dev);
 extern bool vfio_external_group_match_file(struct vfio_group *group,
 					   struct file *filep);
 extern int vfio_external_user_iommu_id(struct vfio_group *group);
@@ -154,18 +152,9 @@ extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			  int npage, int prot, unsigned long *phys_pfn);
 extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 			    int npage);
-
-extern int vfio_group_pin_pages(struct vfio_group *group,
-				unsigned long *user_iova_pfn, int npage,
-				int prot, unsigned long *phys_pfn);
-extern int vfio_group_unpin_pages(struct vfio_group *group,
-				  unsigned long *user_iova_pfn, int npage);
-
 extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
 		       void *data, size_t len, bool write);
 
-extern struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group);
-
 /* each type has independent events */
 enum vfio_notify_type {
 	VFIO_IOMMU_NOTIFY = 0,
-- 
2.36.0


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-21 16:28   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-21 16:28 UTC (permalink / raw)
  To: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang
  Cc: Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L, Christoph Hellwig

When the open_device() op is called, container_users is incremented and
stays incremented until close_device(). Thus, so long as drivers only
call these functions within their open_device()/close_device() region,
they do not need to worry about container_users.

These functions can all only be called between open_device() and
close_device():

  vfio_pin_pages()
  vfio_unpin_pages()
  vfio_dma_rw()
  vfio_register_notifier()
  vfio_unregister_notifier()

Eliminate the calls to vfio_group_add_container_user() and add
vfio_assert_device_open() to detect driver misuse.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/vfio/vfio.c | 78 +++++++++------------------------------------
 1 file changed, 15 insertions(+), 63 deletions(-)
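
As a usage note (illustrative, not part of the diff): these helpers are now
only valid while the device is open, so a driver is expected to bracket
them with its open_device()/close_device() ops, roughly as sketched below.
The demo_* names and struct demo_state are made up for the example; the
vfio_register_notifier()/vfio_unregister_notifier() signatures are the
vfio_device ones from patch 1:

#include <linux/notifier.h>
#include <linux/vfio.h>

struct demo_state {
	struct vfio_device vdev;
	struct notifier_block iommu_nb;
};

static int demo_dma_unmap(struct notifier_block *nb, unsigned long action,
			  void *data)
{
	/* Invalidate whatever the driver pinned/mapped for this range. */
	return NOTIFY_OK;
}

static int demo_open_device(struct vfio_device *vfio_dev)
{
	struct demo_state *st = container_of(vfio_dev, struct demo_state, vdev);
	unsigned long events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;

	st->iommu_nb.notifier_call = demo_dma_unmap;
	/* Legal here: open_device() has already elevated container_users. */
	return vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
				      &st->iommu_nb);
}

static void demo_close_device(struct vfio_device *vfio_dev)
{
	struct demo_state *st = container_of(vfio_dev, struct demo_state, vdev);

	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &st->iommu_nb);
}

Calling vfio_pin_pages(), vfio_dma_rw(), etc. outside this window now trips
the WARN_ON_ONCE in vfio_assert_device_open() and fails with -EINVAL rather
than silently taking a container reference.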

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index ba6fae95555ec7..b566ae3d320b36 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1330,6 +1330,12 @@ static int vfio_group_add_container_user(struct vfio_group *group)
 
 static const struct file_operations vfio_device_fops;
 
+/* true if the vfio_device has open_device() called but not close_device() */
+static bool vfio_assert_device_open(struct vfio_device *device)
+{
+	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
+}
+
 static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 {
 	struct vfio_device *device;
@@ -1544,8 +1550,10 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 	struct vfio_device *device = filep->private_data;
 
 	mutex_lock(&device->dev_set->lock);
-	if (!--device->open_count && device->ops->close_device)
+	vfio_assert_device_open(device);
+	if (device->open_count == 1 && device->ops->close_device)
 		device->ops->close_device(device);
+	device->open_count--;
 	mutex_unlock(&device->dev_set->lock);
 
 	module_put(device->dev->driver->owner);
@@ -2112,7 +2120,7 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !phys_pfn || !npage)
+	if (!user_pfn || !phys_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
@@ -2121,10 +2129,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	if (group->dev_counter > 1)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return ret;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->pin_pages))
@@ -2134,8 +2138,6 @@ int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int npage,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_pin_pages);
@@ -2156,16 +2158,12 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	if (!user_pfn || !npage)
+	if (!user_pfn || !npage || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
-
 	container = vdev->group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unpin_pages))
@@ -2174,8 +2172,6 @@ int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
 	else
 		ret = -ENOTTY;
 
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_unpin_pages);
@@ -2204,13 +2200,9 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void *data,
 	struct vfio_iommu_driver *driver;
 	int ret = 0;
 
-	if (!data || len <= 0)
+	if (!data || len <= 0 || !vfio_assert_device_open(vdev))
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(vdev->group);
-	if (ret)
-		return ret;
-
 	container = vdev->group->container;
 	driver = container->iommu_driver;
 
@@ -2219,9 +2211,6 @@ int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void *data,
 					  user_iova, data, len, write);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(vdev->group);
-
 	return ret;
 }
 EXPORT_SYMBOL(vfio_dma_rw);
@@ -2234,10 +2223,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->register_notifier))
@@ -2245,9 +2230,6 @@ static int vfio_register_iommu_notifier(struct vfio_group *group,
 						     events, nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2258,10 +2240,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 	struct vfio_iommu_driver *driver;
 	int ret;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	container = group->container;
 	driver = container->iommu_driver;
 	if (likely(driver && driver->ops->unregister_notifier))
@@ -2269,9 +2247,6 @@ static int vfio_unregister_iommu_notifier(struct vfio_group *group,
 						       nb);
 	else
 		ret = -ENOTTY;
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2300,10 +2275,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (*events)
 		return -EINVAL;
 
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
 	ret = blocking_notifier_chain_register(&group->notifier, nb);
 
 	/*
@@ -2313,25 +2284,6 @@ static int vfio_register_group_notifier(struct vfio_group *group,
 	if (!ret && set_kvm && group->kvm)
 		blocking_notifier_call_chain(&group->notifier,
 					VFIO_GROUP_NOTIFY_SET_KVM, group->kvm);
-
-	vfio_group_try_dissolve_container(group);
-
-	return ret;
-}
-
-static int vfio_unregister_group_notifier(struct vfio_group *group,
-					 struct notifier_block *nb)
-{
-	int ret;
-
-	ret = vfio_group_add_container_user(group);
-	if (ret)
-		return -EINVAL;
-
-	ret = blocking_notifier_chain_unregister(&group->notifier, nb);
-
-	vfio_group_try_dissolve_container(group);
-
 	return ret;
 }
 
@@ -2341,7 +2293,7 @@ int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type type,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb || !events || (*events == 0))
+	if (!nb || !events || (*events == 0) || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
@@ -2365,7 +2317,7 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 	struct vfio_group *group = dev->group;
 	int ret;
 
-	if (!nb)
+	if (!nb || !vfio_assert_device_open(dev))
 		return -EINVAL;
 
 	switch (type) {
@@ -2373,7 +2325,7 @@ int vfio_unregister_notifier(struct vfio_device *dev,
 		ret = vfio_unregister_iommu_notifier(group, nb);
 		break;
 	case VFIO_GROUP_NOTIFY:
-		ret = vfio_unregister_group_notifier(group, nb);
+		ret = blocking_notifier_chain_unregister(&group->notifier, nb);
 		break;
 	default:
 		ret = -EINVAL;
-- 
2.36.0


^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BUILD: failure for Make the rest of the VFIO driver interface use vfio_device (rev3)
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
                   ` (8 preceding siblings ...)
  (?)
@ 2022-04-21 21:25 ` Patchwork
  -1 siblings, 0 replies; 80+ messages in thread
From: Patchwork @ 2022-04-21 21:25 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: intel-gfx

== Series Details ==

Series: Make the rest of the VFIO driver interface use vfio_device (rev3)
URL   : https://patchwork.freedesktop.org/series/102606/
State : failure

== Summary ==

Error: patch https://patchwork.freedesktop.org/api/1.0/series/102606/revisions/3/mbox/ not applied
Applying: vfio: Make vfio_(un)register_notifier accept a vfio_device
error: sha1 information is lacking or useless (drivers/gpu/drm/i915/gvt/kvmgt.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 vfio: Make vfio_(un)register_notifier accept a vfio_device
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-22  1:39     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:39 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe
> Sent: Friday, April 22, 2022 12:29 AM
> 
> All callers have a struct vfio_device trivially available, pass it in
> directly and avoid calling the expensive vfio_group_get_from_dev().
> 
> To support the unconverted kvmgt mdev driver add
> mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> vfio_mdev.c puts in the drv_data.

stale comment - mdev_legacy_get_vfio_device() is not added by this version
of the patch, so this paragraph should be dropped.

> 
> Acked-by: Eric Farman <farman@linux.ibm.com>
> Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
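
For reference, the heart of this conversion inside vfio_register_notifier()
is that the group no longer has to be looked up (and refcounted) from a bare
struct device; it is taken straight from the vfio_device. A minimal sketch of
what the vfio.c hunk below boils down to, with the type switch and error
paths trimmed:

	/* before: resolve and refcount the group from a struct device */
	group = vfio_group_get_from_dev(dev);
	if (!group)
		return -ENODEV;
	ret = vfio_register_iommu_notifier(group, events, nb);
	vfio_group_put(group);

	/* after: the vfio_device already carries its group */
	group = dev->group;
	ret = vfio_register_iommu_notifier(group, events, nb);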

> ---
>  drivers/gpu/drm/i915/gvt/kvmgt.c  | 24 ++++++++++++------------
>  drivers/s390/cio/vfio_ccw_ops.c   |  7 +++----
>  drivers/s390/crypto/vfio_ap_ops.c | 14 +++++++-------
>  drivers/vfio/vfio.c               | 25 +++++++------------------
>  include/linux/vfio.h              |  4 ++--
>  5 files changed, 31 insertions(+), 43 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 0787ba5c301f5e..1cec4f1fdfaced 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -810,8 +810,8 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
> 
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vfio_dev->dev, VFIO_IOMMU_NOTIFY,
> &events,
> -				&vgpu->iommu_notifier);
> +	ret = vfio_register_notifier(vfio_dev, VFIO_IOMMU_NOTIFY, &events,
> +				     &vgpu->iommu_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
>  			ret);
> @@ -819,8 +819,8 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	}
> 
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> -	ret = vfio_register_notifier(vfio_dev->dev, VFIO_GROUP_NOTIFY,
> &events,
> -				&vgpu->group_notifier);
> +	ret = vfio_register_notifier(vfio_dev, VFIO_GROUP_NOTIFY, &events,
> +				     &vgpu->group_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
>  			ret);
> @@ -873,12 +873,12 @@ static int intel_vgpu_open_device(struct
> vfio_device *vfio_dev)
>  	vgpu->vfio_group = NULL;
> 
>  undo_register:
> -	vfio_unregister_notifier(vfio_dev->dev, VFIO_GROUP_NOTIFY,
> -					&vgpu->group_notifier);
> +	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
> +				 &vgpu->group_notifier);
> 
>  undo_iommu:
> -	vfio_unregister_notifier(vfio_dev->dev, VFIO_IOMMU_NOTIFY,
> -					&vgpu->iommu_notifier);
> +	vfio_unregister_notifier(vfio_dev, VFIO_IOMMU_NOTIFY,
> +				 &vgpu->iommu_notifier);
>  out:
>  	return ret;
>  }
> @@ -907,13 +907,13 @@ static void __intel_vgpu_release(struct intel_vgpu
> *vgpu)
> 
>  	intel_gvt_release_vgpu(vgpu);
> 
> -	ret = vfio_unregister_notifier(vgpu->vfio_device.dev,
> VFIO_IOMMU_NOTIFY,
> -					&vgpu->iommu_notifier);
> +	ret = vfio_unregister_notifier(&vgpu->vfio_device,
> VFIO_IOMMU_NOTIFY,
> +				       &vgpu->iommu_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for iommu failed: %d\n", ret);
> 
> -	ret = vfio_unregister_notifier(vgpu->vfio_device.dev,
> VFIO_GROUP_NOTIFY,
> -					&vgpu->group_notifier);
> +	ret = vfio_unregister_notifier(&vgpu->vfio_device,
> VFIO_GROUP_NOTIFY,
> +				       &vgpu->group_notifier);
>  	drm_WARN(&i915->drm, ret,
>  		 "vfio_unregister_notifier for group failed: %d\n", ret);
> 
> diff --git a/drivers/s390/cio/vfio_ccw_ops.c
> b/drivers/s390/cio/vfio_ccw_ops.c
> index c4d60cdbf247bf..b49e2e9db2dc6f 100644
> --- a/drivers/s390/cio/vfio_ccw_ops.c
> +++ b/drivers/s390/cio/vfio_ccw_ops.c
> @@ -183,7 +183,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
> 
>  	private->nb.notifier_call = vfio_ccw_mdev_notifier;
> 
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				     &events, &private->nb);
>  	if (ret)
>  		return ret;
> @@ -204,8 +204,7 @@ static int vfio_ccw_mdev_open_device(struct
> vfio_device *vdev)
> 
>  out_unregister:
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				 &private->nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>  	return ret;
>  }
> 
> @@ -223,7 +222,7 @@ static void vfio_ccw_mdev_close_device(struct
> vfio_device *vdev)
> 
>  	cp_free(&private->cp);
>  	vfio_ccw_unregister_dev_regions(private);
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY, &private-
> >nb);
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY, &private->nb);
>  }
> 
>  static ssize_t vfio_ccw_mdev_read_io_region(struct vfio_ccw_private
> *private,
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index ee0a3bf8f476ca..bfa7ee6ef532d9 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1406,21 +1406,21 @@ static int vfio_ap_mdev_open_device(struct
> vfio_device *vdev)
>  	matrix_mdev->group_notifier.notifier_call =
> vfio_ap_mdev_group_notifier;
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> 
> -	ret = vfio_register_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> -				     &events, &matrix_mdev->group_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_GROUP_NOTIFY, &events,
> +				     &matrix_mdev->group_notifier);
>  	if (ret)
>  		return ret;
> 
>  	matrix_mdev->iommu_notifier.notifier_call =
> vfio_ap_mdev_iommu_notifier;
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> -	ret = vfio_register_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> -				     &events, &matrix_mdev->iommu_notifier);
> +	ret = vfio_register_notifier(vdev, VFIO_IOMMU_NOTIFY, &events,
> +				     &matrix_mdev->iommu_notifier);
>  	if (ret)
>  		goto out_unregister_group;
>  	return 0;
> 
>  out_unregister_group:
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	return ret;
>  }
> @@ -1430,9 +1430,9 @@ static void vfio_ap_mdev_close_device(struct
> vfio_device *vdev)
>  	struct ap_matrix_mdev *matrix_mdev =
>  		container_of(vdev, struct ap_matrix_mdev, vdev);
> 
> -	vfio_unregister_notifier(vdev->dev, VFIO_IOMMU_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_IOMMU_NOTIFY,
>  				 &matrix_mdev->iommu_notifier);
> -	vfio_unregister_notifier(vdev->dev, VFIO_GROUP_NOTIFY,
> +	vfio_unregister_notifier(vdev, VFIO_GROUP_NOTIFY,
>  				 &matrix_mdev->group_notifier);
>  	vfio_ap_mdev_unset_kvm(matrix_mdev);
>  }
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index a4555014bd1e72..8a5c46aa2bef61 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2484,19 +2484,15 @@ static int vfio_unregister_group_notifier(struct
> vfio_group *group,
>  	return ret;
>  }
> 
> -int vfio_register_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_register_notifier(struct vfio_device *dev, enum vfio_notify_type
> type,
>  			   unsigned long *events, struct notifier_block *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
> 
> -	if (!dev || !nb || !events || (*events == 0))
> +	if (!nb || !events || (*events == 0))
>  		return -EINVAL;
> 
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_register_iommu_notifier(group, events, nb);
> @@ -2507,25 +2503,20 @@ int vfio_register_notifier(struct device *dev,
> enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_register_notifier);
> 
> -int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
> +int vfio_unregister_notifier(struct vfio_device *dev,
> +			     enum vfio_notify_type type,
>  			     struct notifier_block *nb)
>  {
> -	struct vfio_group *group;
> +	struct vfio_group *group = dev->group;
>  	int ret;
> 
> -	if (!dev || !nb)
> +	if (!nb)
>  		return -EINVAL;
> 
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
>  	switch (type) {
>  	case VFIO_IOMMU_NOTIFY:
>  		ret = vfio_unregister_iommu_notifier(group, nb);
> @@ -2536,8 +2527,6 @@ int vfio_unregister_notifier(struct device *dev,
> enum vfio_notify_type type,
>  	default:
>  		ret = -EINVAL;
>  	}
> -
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 66dda06ec42d1b..748ec0e0293aea 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -178,11 +178,11 @@ enum vfio_notify_type {
>  /* events for VFIO_GROUP_NOTIFY */
>  #define VFIO_GROUP_NOTIFY_SET_KVM	BIT(0)
> 
> -extern int vfio_register_notifier(struct device *dev,
> +extern int vfio_register_notifier(struct vfio_device *dev,
>  				  enum vfio_notify_type type,
>  				  unsigned long *required_events,
>  				  struct notifier_block *nb);
> -extern int vfio_unregister_notifier(struct device *dev,
> +extern int vfio_unregister_notifier(struct vfio_device *dev,
>  				    enum vfio_notify_type type,
>  				    struct notifier_block *nb);
> 
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-22  1:43     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:43 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Eric Farman <farman@linux.ibm.com>
> Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com>
> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
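
For quick reference, the shape of the driver-side conversion, taken from the
vfio_ap hunk quoted below - the struct device argument is replaced by the
vfio_device embedded in the driver's own structure, so VFIO no longer has to
look the group up:

	/* before: pass a struct device, forcing a group lookup inside VFIO */
	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &h_pfn);

	/* after: pass the embedded vfio_device directly */
	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &h_pfn);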

> ---
>  .../driver-api/vfio-mediated-device.rst       |  4 +-
>  drivers/s390/cio/vfio_ccw_cp.c                |  6 +--
>  drivers/s390/crypto/vfio_ap_ops.c             |  9 ++---
>  drivers/vfio/vfio.c                           | 40 ++++++-------------
>  include/linux/vfio.h                          |  4 +-
>  5 files changed, 24 insertions(+), 39 deletions(-)
> 
> diff --git a/Documentation/driver-api/vfio-mediated-device.rst
> b/Documentation/driver-api/vfio-mediated-device.rst
> index 784bbeb22adcf5..2f9e6025eb2fe1 100644
> --- a/Documentation/driver-api/vfio-mediated-device.rst
> +++ b/Documentation/driver-api/vfio-mediated-device.rst
> @@ -262,10 +262,10 @@ Translation APIs for Mediated Devices
>  The following APIs are provided for translating user pfn to host pfn in a VFIO
>  driver::
> 
> -	extern int vfio_pin_pages(struct device *dev, unsigned long
> *user_pfn,
> +	int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>  				  int npage, int prot, unsigned long
> *phys_pfn);
> 
> -	extern int vfio_unpin_pages(struct device *dev, unsigned long
> *user_pfn,
> +	int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  				    int npage);
> 
>  These functions call back into the back-end IOMMU module by using the
> pin_pages
> diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
> index 7a1cf3091cd647..0c2be9421ab78f 100644
> --- a/drivers/s390/cio/vfio_ccw_cp.c
> +++ b/drivers/s390/cio/vfio_ccw_cp.c
> @@ -103,13 +103,13 @@ static int pfn_array_pin(struct pfn_array *pa,
> struct vfio_device *vdev)
>  {
>  	int ret = 0;
> 
> -	ret = vfio_pin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr,
> +	ret = vfio_pin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr,
>  			     IOMMU_READ | IOMMU_WRITE, pa->pa_pfn);
> 
>  	if (ret < 0) {
>  		goto err_out;
>  	} else if (ret > 0 && ret != pa->pa_nr) {
> -		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, ret);
> +		vfio_unpin_pages(vdev, pa->pa_iova_pfn, ret);
>  		ret = -EINVAL;
>  		goto err_out;
>  	}
> @@ -127,7 +127,7 @@ static void pfn_array_unpin_free(struct pfn_array
> *pa, struct vfio_device *vdev)
>  {
>  	/* Only unpin if any pages were pinned to begin with */
>  	if (pa->pa_nr)
> -		vfio_unpin_pages(vdev->dev, pa->pa_iova_pfn, pa->pa_nr);
> +		vfio_unpin_pages(vdev, pa->pa_iova_pfn, pa->pa_nr);
>  	pa->pa_nr = 0;
>  	kfree(pa->pa_iova_pfn);
>  }
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c
> b/drivers/s390/crypto/vfio_ap_ops.c
> index bfa7ee6ef532d9..e8914024f5b1af 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -124,8 +124,7 @@ static void vfio_ap_free_aqic_resources(struct
> vfio_ap_queue *q)
>  		q->saved_isc = VFIO_AP_ISC_INVALID;
>  	}
>  	if (q->saved_pfn && !WARN_ON(!q->matrix_mdev)) {
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> -				 &q->saved_pfn, 1);
> +		vfio_unpin_pages(&q->matrix_mdev->vdev, &q->saved_pfn,
> 1);
>  		q->saved_pfn = 0;
>  	}
>  }
> @@ -258,7 +257,7 @@ static struct ap_queue_status
> vfio_ap_irq_enable(struct vfio_ap_queue *q,
>  		return status;
>  	}
> 
> -	ret = vfio_pin_pages(mdev_dev(q->matrix_mdev->mdev), &g_pfn, 1,
> +	ret = vfio_pin_pages(&q->matrix_mdev->vdev, &g_pfn, 1,
>  			     IOMMU_READ | IOMMU_WRITE, &h_pfn);
>  	switch (ret) {
>  	case 1:
> @@ -301,7 +300,7 @@ static struct ap_queue_status
> vfio_ap_irq_enable(struct vfio_ap_queue *q,
>  		break;
>  	case AP_RESPONSE_OTHERWISE_CHANGED:
>  		/* We could not modify IRQ setings: clear new configuration
> */
> -		vfio_unpin_pages(mdev_dev(q->matrix_mdev->mdev),
> &g_pfn, 1);
> +		vfio_unpin_pages(&q->matrix_mdev->vdev, &g_pfn, 1);
>  		kvm_s390_gisc_unregister(kvm, isc);
>  		break;
>  	default:
> @@ -1250,7 +1249,7 @@ static int vfio_ap_mdev_iommu_notifier(struct
> notifier_block *nb,
>  		struct vfio_iommu_type1_dma_unmap *unmap = data;
>  		unsigned long g_pfn = unmap->iova >> PAGE_SHIFT;
> 
> -		vfio_unpin_pages(mdev_dev(matrix_mdev->mdev), &g_pfn,
> 1);
> +		vfio_unpin_pages(&matrix_mdev->vdev, &g_pfn, 1);
>  		return NOTIFY_OK;
>  	}
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 8a5c46aa2bef61..24b92a45cfc8f1 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2142,32 +2142,26 @@
> EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
>   * @phys_pfn[out]: array of host PFNs
>   * Return error or number of pages pinned.
>   */
> -int vfio_pin_pages(struct device *dev, unsigned long *user_pfn, int npage,
> +int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn, int
> npage,
>  		   int prot, unsigned long *phys_pfn)
>  {
>  	struct vfio_container *container;
> -	struct vfio_group *group;
> +	struct vfio_group *group = vdev->group;
>  	struct vfio_iommu_driver *driver;
>  	int ret;
> 
> -	if (!dev || !user_pfn || !phys_pfn || !npage)
> +	if (!user_pfn || !phys_pfn || !npage)
>  		return -EINVAL;
> 
>  	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>  		return -E2BIG;
> 
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
> -	if (group->dev_counter > 1) {
> -		ret = -EINVAL;
> -		goto err_pin_pages;
> -	}
> +	if (group->dev_counter > 1)
> +		return -EINVAL;
> 
>  	ret = vfio_group_add_container_user(group);
>  	if (ret)
> -		goto err_pin_pages;
> +		return ret;
> 
>  	container = group->container;
>  	driver = container->iommu_driver;
> @@ -2180,8 +2174,6 @@ int vfio_pin_pages(struct device *dev, unsigned
> long *user_pfn, int npage,
> 
>  	vfio_group_try_dissolve_container(group);
> 
> -err_pin_pages:
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_pin_pages);
> @@ -2195,28 +2187,24 @@ EXPORT_SYMBOL(vfio_pin_pages);
>   *                 be greater than VFIO_PIN_PAGES_MAX_ENTRIES.
>   * Return error or number of pages unpinned.
>   */
> -int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn, int npage)
> +int vfio_unpin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
> +		     int npage)
>  {
>  	struct vfio_container *container;
> -	struct vfio_group *group;
>  	struct vfio_iommu_driver *driver;
>  	int ret;
> 
> -	if (!dev || !user_pfn || !npage)
> +	if (!user_pfn || !npage)
>  		return -EINVAL;
> 
>  	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>  		return -E2BIG;
> 
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return -ENODEV;
> -
> -	ret = vfio_group_add_container_user(group);
> +	ret = vfio_group_add_container_user(vdev->group);
>  	if (ret)
> -		goto err_unpin_pages;
> +		return ret;
> 
> -	container = group->container;
> +	container = vdev->group->container;
>  	driver = container->iommu_driver;
>  	if (likely(driver && driver->ops->unpin_pages))
>  		ret = driver->ops->unpin_pages(container->iommu_data,
> user_pfn,
> @@ -2224,10 +2212,8 @@ int vfio_unpin_pages(struct device *dev,
> unsigned long *user_pfn, int npage)
>  	else
>  		ret = -ENOTTY;
> 
> -	vfio_group_try_dissolve_container(group);
> +	vfio_group_try_dissolve_container(vdev->group);
> 
> -err_unpin_pages:
> -	vfio_group_put(group);
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_unpin_pages);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 748ec0e0293aea..8f2a09801a660b 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -150,9 +150,9 @@ extern long vfio_external_check_extension(struct
> vfio_group *group,
> 
>  #define VFIO_PIN_PAGES_MAX_ENTRIES	(PAGE_SIZE/sizeof(unsigned
> long))
> 
> -extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
> +extern int vfio_pin_pages(struct vfio_device *vdev, unsigned long *user_pfn,
>  			  int npage, int prot, unsigned long *phys_pfn);
> -extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
> +extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			    int npage);
> 
>  extern int vfio_group_pin_pages(struct vfio_group *group,
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-22  1:50     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:50 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.

Using the same description as the last patch leaves the impression that
the two patches make exactly the same type of change. But this patch
actually includes one more change: it now grabs a reference on the
container inside vfio_dma_rw() itself. That is worth an explanation in
the commit message.
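
For context, the extra change in question: the old contract required the
caller to hold an external-user group reference across the call, while the
new vfio_dma_rw() takes and drops the container reference itself. A rough
sketch of the two calling conventions (caller-side error handling trimmed;
this illustrates the contract rather than any one driver verbatim):

	/* before: caller pins the group as an external user around the call */
	group = vfio_group_get_external_user_from_dev(dev);
	if (IS_ERR(group))
		return PTR_ERR(group);
	ret = vfio_dma_rw(group, gpa, buf, len, false);
	vfio_group_put_external_user(group);

	/* after: vfio_dma_rw() grabs the container reference internally */
	ret = vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);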

> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

otherwise:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h |  4 ++--
>  drivers/vfio/vfio.c            | 24 +++++++++++-------------
>  include/linux/vfio.h           |  2 +-
>  3 files changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 03ecffc2ba56a9..5a28ee965b7f3e 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -732,7 +732,7 @@ static inline int intel_gvt_read_gpa(struct intel_vgpu
> *vgpu, unsigned long gpa,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, false);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);
>  }
> 
>  /**
> @@ -750,7 +750,7 @@ static inline int intel_gvt_write_gpa(struct intel_vgpu
> *vgpu,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, true);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, true);
>  }
> 
>  void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 24b92a45cfc8f1..d10d20d393b706 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2323,32 +2323,28 @@ EXPORT_SYMBOL(vfio_group_unpin_pages);
>   * As the read/write of user space memory is conducted via the CPUs and is
>   * not a real device DMA, it is not necessary to pin the user space memory.
>   *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> + * @vdev [in]		: VFIO device
>   * @user_iova [in]	: base IOVA of a user space buffer
>   * @data [in]		: pointer to kernel buffer
>   * @len [in]		: kernel buffer length
>   * @write		: indicate read or write
>   * Return error code on failure or 0 on success.
>   */
> -int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> -		void *data, size_t len, bool write)
> +int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void
> *data,
> +		size_t len, bool write)
>  {
>  	struct vfio_container *container;
>  	struct vfio_iommu_driver *driver;
>  	int ret = 0;
> 
> -	if (!group || !data || len <= 0)
> +	if (!data || len <= 0)
>  		return -EINVAL;
> 
> -	container = group->container;
> +	ret = vfio_group_add_container_user(vdev->group);
> +	if (ret)
> +		return ret;
> +
> +	container = vdev->group->container;
>  	driver = container->iommu_driver;
> 
>  	if (likely(driver && driver->ops->dma_rw))
> @@ -2357,6 +2353,8 @@ int vfio_dma_rw(struct vfio_group *group,
> dma_addr_t user_iova,
>  	else
>  		ret = -ENOTTY;
> 
> +	vfio_group_try_dissolve_container(vdev->group);
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_dma_rw);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 8f2a09801a660b..91d46e532ca104 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -161,7 +161,7 @@ extern int vfio_group_pin_pages(struct vfio_group
> *group,
>  extern int vfio_group_unpin_pages(struct vfio_group *group,
>  				  unsigned long *user_iova_pfn, int npage);
> 
> -extern int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> +extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
>  extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-22  1:50     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:50 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.

Using the same description as last patch leaves the impression that
the two patches do the exactly same type of change. But this
patch actually includes one more change to grab a reference on the
container. This is worth an explanation.

> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

otherwise:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h |  4 ++--
>  drivers/vfio/vfio.c            | 24 +++++++++++-------------
>  include/linux/vfio.h           |  2 +-
>  3 files changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 03ecffc2ba56a9..5a28ee965b7f3e 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -732,7 +732,7 @@ static inline int intel_gvt_read_gpa(struct intel_vgpu
> *vgpu, unsigned long gpa,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, false);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);
>  }
> 
>  /**
> @@ -750,7 +750,7 @@ static inline int intel_gvt_write_gpa(struct intel_vgpu
> *vgpu,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, true);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, true);
>  }
> 
>  void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 24b92a45cfc8f1..d10d20d393b706 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2323,32 +2323,28 @@ EXPORT_SYMBOL(vfio_group_unpin_pages);
>   * As the read/write of user space memory is conducted via the CPUs and is
>   * not a real device DMA, it is not necessary to pin the user space memory.
>   *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> + * @vdev [in]		: VFIO device
>   * @user_iova [in]	: base IOVA of a user space buffer
>   * @data [in]		: pointer to kernel buffer
>   * @len [in]		: kernel buffer length
>   * @write		: indicate read or write
>   * Return error code on failure or 0 on success.
>   */
> -int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> -		void *data, size_t len, bool write)
> +int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void
> *data,
> +		size_t len, bool write)
>  {
>  	struct vfio_container *container;
>  	struct vfio_iommu_driver *driver;
>  	int ret = 0;
> 
> -	if (!group || !data || len <= 0)
> +	if (!data || len <= 0)
>  		return -EINVAL;
> 
> -	container = group->container;
> +	ret = vfio_group_add_container_user(vdev->group);
> +	if (ret)
> +		return ret;
> +
> +	container = vdev->group->container;
>  	driver = container->iommu_driver;
> 
>  	if (likely(driver && driver->ops->dma_rw))
> @@ -2357,6 +2353,8 @@ int vfio_dma_rw(struct vfio_group *group,
> dma_addr_t user_iova,
>  	else
>  		ret = -ENOTTY;
> 
> +	vfio_group_try_dissolve_container(vdev->group);
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_dma_rw);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 8f2a09801a660b..91d46e532ca104 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -161,7 +161,7 @@ extern int vfio_group_pin_pages(struct vfio_group
> *group,
>  extern int vfio_group_unpin_pages(struct vfio_group *group,
>  				  unsigned long *user_iova_pfn, int npage);
> 
> -extern int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> +extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
>  extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-22  1:50     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:50 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. The struct vfio_device already
> contains the group we need so this avoids complexity, extra refcountings,
> and a confusing lifecycle model.

Using the same description as the last patch leaves the impression that
the two patches make exactly the same type of change. But this patch
actually includes one additional change: it grabs a reference on the
container. That is worth an explanation in the commit message.

> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

otherwise:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h |  4 ++--
>  drivers/vfio/vfio.c            | 24 +++++++++++-------------
>  include/linux/vfio.h           |  2 +-
>  3 files changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 03ecffc2ba56a9..5a28ee965b7f3e 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -732,7 +732,7 @@ static inline int intel_gvt_read_gpa(struct intel_vgpu
> *vgpu, unsigned long gpa,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, false);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);
>  }
> 
>  /**
> @@ -750,7 +750,7 @@ static inline int intel_gvt_write_gpa(struct intel_vgpu
> *vgpu,
>  {
>  	if (!vgpu->attached)
>  		return -ESRCH;
> -	return vfio_dma_rw(vgpu->vfio_group, gpa, buf, len, true);
> +	return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, true);
>  }
> 
>  void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 24b92a45cfc8f1..d10d20d393b706 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2323,32 +2323,28 @@ EXPORT_SYMBOL(vfio_group_unpin_pages);
>   * As the read/write of user space memory is conducted via the CPUs and is
>   * not a real device DMA, it is not necessary to pin the user space memory.
>   *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> + * @vdev [in]		: VFIO device
>   * @user_iova [in]	: base IOVA of a user space buffer
>   * @data [in]		: pointer to kernel buffer
>   * @len [in]		: kernel buffer length
>   * @write		: indicate read or write
>   * Return error code on failure or 0 on success.
>   */
> -int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> -		void *data, size_t len, bool write)
> +int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova, void
> *data,
> +		size_t len, bool write)
>  {
>  	struct vfio_container *container;
>  	struct vfio_iommu_driver *driver;
>  	int ret = 0;
> 
> -	if (!group || !data || len <= 0)
> +	if (!data || len <= 0)
>  		return -EINVAL;
> 
> -	container = group->container;
> +	ret = vfio_group_add_container_user(vdev->group);
> +	if (ret)
> +		return ret;
> +
> +	container = vdev->group->container;
>  	driver = container->iommu_driver;
> 
>  	if (likely(driver && driver->ops->dma_rw))
> @@ -2357,6 +2353,8 @@ int vfio_dma_rw(struct vfio_group *group,
> dma_addr_t user_iova,
>  	else
>  		ret = -ENOTTY;
> 
> +	vfio_group_try_dissolve_container(vdev->group);
> +
>  	return ret;
>  }
>  EXPORT_SYMBOL(vfio_dma_rw);
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 8f2a09801a660b..91d46e532ca104 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -161,7 +161,7 @@ extern int vfio_group_pin_pages(struct vfio_group
> *group,
>  extern int vfio_group_unpin_pages(struct vfio_group *group,
>  				  unsigned long *user_iova_pfn, int npage);
> 
> -extern int vfio_dma_rw(struct vfio_group *group, dma_addr_t user_iova,
> +extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
>  extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-22  1:51     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> reason to use a group interface here, kvmgt has easy access to a
> vfio_device.
> 
> Delete kvmgt_vdev::vfio_group since these calls were the last users.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
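
For context, the conversion is mechanical because the vGPU already embeds
its vfio_device, so the pointer is always at hand. A simplified sketch
(not the exact i915 struct layout), assuming the embedding implied by the
vfio_dev_to_vgpu() helper used in the diff below:

	struct intel_vgpu {
		struct vfio_device vfio_device;	/* embedded VFIO core object */
		struct kvm *kvm;
		atomic_t released;
		/* ... other driver-private state ... */
	};

	static inline struct intel_vgpu *
	vfio_dev_to_vgpu(struct vfio_device *vfio_dev)
	{
		return container_of(vfio_dev, struct intel_vgpu, vfio_device);
	}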

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h   |  1 -
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 27 ++++++---------------------
>  2 files changed, 6 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 5a28ee965b7f3e..2af4c83e733c6c 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -231,7 +231,6 @@ struct intel_vgpu {
>  	struct kvm *kvm;
>  	struct work_struct release_work;
>  	atomic_t released;
> -	struct vfio_group *vfio_group;
> 
>  	struct kvm_page_track_notifier_node track_node;
>  #define NR_BKT (1 << 18)
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 1cec4f1fdfaced..7655ffa97d5116 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -243,7 +243,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu
> *vgpu, unsigned long gfn,
>  	for (npage = 0; npage < total_pages; npage++) {
>  		unsigned long cur_gfn = gfn + npage;
> 
> -		ret = vfio_group_unpin_pages(vgpu->vfio_group, &cur_gfn,
> 1);
> +		ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
>  		drm_WARN_ON(&i915->drm, ret != 1);
>  	}
>  }
> @@ -266,8 +266,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu,
> unsigned long gfn,
>  		unsigned long cur_gfn = gfn + npage;
>  		unsigned long pfn;
> 
> -		ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
> -					   IOMMU_READ | IOMMU_WRITE,
> &pfn);
> +		ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
> +				     IOMMU_READ | IOMMU_WRITE, &pfn);
>  		if (ret != 1) {
>  			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx,
> ret %d\n",
>  				     cur_gfn, ret);
> @@ -804,7 +804,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>  	unsigned long events;
>  	int ret;
> -	struct vfio_group *vfio_group;
> 
>  	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
>  	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
> @@ -827,28 +826,19 @@ static int intel_vgpu_open_device(struct
> vfio_device *vfio_dev)
>  		goto undo_iommu;
>  	}
> 
> -	vfio_group =
> -		vfio_group_get_external_user_from_dev(vgpu-
> >vfio_device.dev);
> -	if (IS_ERR_OR_NULL(vfio_group)) {
> -		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
> -		gvt_vgpu_err("vfio_group_get_external_user_from_dev
> failed\n");
> -		goto undo_register;
> -	}
> -	vgpu->vfio_group = vfio_group;
> -
>  	ret = -EEXIST;
>  	if (vgpu->attached)
> -		goto undo_group;
> +		goto undo_register;
> 
>  	ret = -ESRCH;
>  	if (!vgpu->kvm || vgpu->kvm->mm != current->mm) {
>  		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
> -		goto undo_group;
> +		goto undo_register;
>  	}
> 
>  	ret = -EEXIST;
>  	if (__kvmgt_vgpu_exist(vgpu))
> -		goto undo_group;
> +		goto undo_register;
> 
>  	vgpu->attached = true;
>  	kvm_get_kvm(vgpu->kvm);
> @@ -868,10 +858,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	atomic_set(&vgpu->released, 0);
>  	return 0;
> 
> -undo_group:
> -	vfio_group_put_external_user(vgpu->vfio_group);
> -	vgpu->vfio_group = NULL;
> -
>  undo_register:
>  	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  				 &vgpu->group_notifier);
> @@ -925,7 +911,6 @@ static void __intel_vgpu_release(struct intel_vgpu
> *vgpu)
>  	gvt_cache_destroy(vgpu);
> 
>  	intel_vgpu_release_msi_eventfd_ctx(vgpu);
> -	vfio_group_put_external_user(vgpu->vfio_group);
> 
>  	vgpu->kvm = NULL;
>  	vgpu->attached = false;
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-22  1:51     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> reason to use a group interface here, kvmgt has easy access to a
> vfio_device.
> 
> Delete kvmgt_vdev::vfio_group since these calls were the last users.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h   |  1 -
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 27 ++++++---------------------
>  2 files changed, 6 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 5a28ee965b7f3e..2af4c83e733c6c 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -231,7 +231,6 @@ struct intel_vgpu {
>  	struct kvm *kvm;
>  	struct work_struct release_work;
>  	atomic_t released;
> -	struct vfio_group *vfio_group;
> 
>  	struct kvm_page_track_notifier_node track_node;
>  #define NR_BKT (1 << 18)
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 1cec4f1fdfaced..7655ffa97d5116 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -243,7 +243,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu
> *vgpu, unsigned long gfn,
>  	for (npage = 0; npage < total_pages; npage++) {
>  		unsigned long cur_gfn = gfn + npage;
> 
> -		ret = vfio_group_unpin_pages(vgpu->vfio_group, &cur_gfn,
> 1);
> +		ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
>  		drm_WARN_ON(&i915->drm, ret != 1);
>  	}
>  }
> @@ -266,8 +266,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu,
> unsigned long gfn,
>  		unsigned long cur_gfn = gfn + npage;
>  		unsigned long pfn;
> 
> -		ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
> -					   IOMMU_READ | IOMMU_WRITE,
> &pfn);
> +		ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
> +				     IOMMU_READ | IOMMU_WRITE, &pfn);
>  		if (ret != 1) {
>  			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx,
> ret %d\n",
>  				     cur_gfn, ret);
> @@ -804,7 +804,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>  	unsigned long events;
>  	int ret;
> -	struct vfio_group *vfio_group;
> 
>  	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
>  	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
> @@ -827,28 +826,19 @@ static int intel_vgpu_open_device(struct
> vfio_device *vfio_dev)
>  		goto undo_iommu;
>  	}
> 
> -	vfio_group =
> -		vfio_group_get_external_user_from_dev(vgpu-
> >vfio_device.dev);
> -	if (IS_ERR_OR_NULL(vfio_group)) {
> -		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
> -		gvt_vgpu_err("vfio_group_get_external_user_from_dev
> failed\n");
> -		goto undo_register;
> -	}
> -	vgpu->vfio_group = vfio_group;
> -
>  	ret = -EEXIST;
>  	if (vgpu->attached)
> -		goto undo_group;
> +		goto undo_register;
> 
>  	ret = -ESRCH;
>  	if (!vgpu->kvm || vgpu->kvm->mm != current->mm) {
>  		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
> -		goto undo_group;
> +		goto undo_register;
>  	}
> 
>  	ret = -EEXIST;
>  	if (__kvmgt_vgpu_exist(vgpu))
> -		goto undo_group;
> +		goto undo_register;
> 
>  	vgpu->attached = true;
>  	kvm_get_kvm(vgpu->kvm);
> @@ -868,10 +858,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	atomic_set(&vgpu->released, 0);
>  	return 0;
> 
> -undo_group:
> -	vfio_group_put_external_user(vgpu->vfio_group);
> -	vgpu->vfio_group = NULL;
> -
>  undo_register:
>  	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  				 &vgpu->group_notifier);
> @@ -925,7 +911,6 @@ static void __intel_vgpu_release(struct intel_vgpu
> *vgpu)
>  	gvt_cache_destroy(vgpu);
> 
>  	intel_vgpu_release_msi_eventfd_ctx(vgpu);
> -	vfio_group_put_external_user(vgpu->vfio_group);
> 
>  	vgpu->kvm = NULL;
>  	vgpu->attached = false;
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-22  1:51     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:51 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Use the existing vfio_device versions of vfio_(un)pin_pages(). There is no
> reason to use a group interface here, kvmgt has easy access to a
> vfio_device.
> 
> Delete kvmgt_vdev::vfio_group since these calls were the last users.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/gpu/drm/i915/gvt/gvt.h   |  1 -
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 27 ++++++---------------------
>  2 files changed, 6 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 5a28ee965b7f3e..2af4c83e733c6c 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -231,7 +231,6 @@ struct intel_vgpu {
>  	struct kvm *kvm;
>  	struct work_struct release_work;
>  	atomic_t released;
> -	struct vfio_group *vfio_group;
> 
>  	struct kvm_page_track_notifier_node track_node;
>  #define NR_BKT (1 << 18)
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 1cec4f1fdfaced..7655ffa97d5116 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -243,7 +243,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu
> *vgpu, unsigned long gfn,
>  	for (npage = 0; npage < total_pages; npage++) {
>  		unsigned long cur_gfn = gfn + npage;
> 
> -		ret = vfio_group_unpin_pages(vgpu->vfio_group, &cur_gfn,
> 1);
> +		ret = vfio_unpin_pages(&vgpu->vfio_device, &cur_gfn, 1);
>  		drm_WARN_ON(&i915->drm, ret != 1);
>  	}
>  }
> @@ -266,8 +266,8 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu,
> unsigned long gfn,
>  		unsigned long cur_gfn = gfn + npage;
>  		unsigned long pfn;
> 
> -		ret = vfio_group_pin_pages(vgpu->vfio_group, &cur_gfn, 1,
> -					   IOMMU_READ | IOMMU_WRITE,
> &pfn);
> +		ret = vfio_pin_pages(&vgpu->vfio_device, &cur_gfn, 1,
> +				     IOMMU_READ | IOMMU_WRITE, &pfn);
>  		if (ret != 1) {
>  			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx,
> ret %d\n",
>  				     cur_gfn, ret);
> @@ -804,7 +804,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>  	unsigned long events;
>  	int ret;
> -	struct vfio_group *vfio_group;
> 
>  	vgpu->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
>  	vgpu->group_notifier.notifier_call = intel_vgpu_group_notifier;
> @@ -827,28 +826,19 @@ static int intel_vgpu_open_device(struct
> vfio_device *vfio_dev)
>  		goto undo_iommu;
>  	}
> 
> -	vfio_group =
> -		vfio_group_get_external_user_from_dev(vgpu-
> >vfio_device.dev);
> -	if (IS_ERR_OR_NULL(vfio_group)) {
> -		ret = !vfio_group ? -EFAULT : PTR_ERR(vfio_group);
> -		gvt_vgpu_err("vfio_group_get_external_user_from_dev
> failed\n");
> -		goto undo_register;
> -	}
> -	vgpu->vfio_group = vfio_group;
> -
>  	ret = -EEXIST;
>  	if (vgpu->attached)
> -		goto undo_group;
> +		goto undo_register;
> 
>  	ret = -ESRCH;
>  	if (!vgpu->kvm || vgpu->kvm->mm != current->mm) {
>  		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
> -		goto undo_group;
> +		goto undo_register;
>  	}
> 
>  	ret = -EEXIST;
>  	if (__kvmgt_vgpu_exist(vgpu))
> -		goto undo_group;
> +		goto undo_register;
> 
>  	vgpu->attached = true;
>  	kvm_get_kvm(vgpu->kvm);
> @@ -868,10 +858,6 @@ static int intel_vgpu_open_device(struct vfio_device
> *vfio_dev)
>  	atomic_set(&vgpu->released, 0);
>  	return 0;
> 
> -undo_group:
> -	vfio_group_put_external_user(vgpu->vfio_group);
> -	vgpu->vfio_group = NULL;
> -
>  undo_register:
>  	vfio_unregister_notifier(vfio_dev, VFIO_GROUP_NOTIFY,
>  				 &vgpu->group_notifier);
> @@ -925,7 +911,6 @@ static void __intel_vgpu_release(struct intel_vgpu
> *vgpu)
>  	gvt_cache_destroy(vgpu);
> 
>  	intel_vgpu_release_msi_eventfd_ctx(vgpu);
> -	vfio_group_put_external_user(vgpu->vfio_group);
> 
>  	vgpu->kvm = NULL;
>  	vgpu->attached = false;
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 6/7] vfio: Remove dead code
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-22  1:52     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:52 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Now that callers have been updated to use the vfio_device APIs the driver
> facing group interface is no longer used, delete it:
> 
> - vfio_group_get_external_user_from_dev()
> - vfio_group_pin_pages()
> - vfio_group_unpin_pages()
> - vfio_group_iommu_domain()
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/vfio/vfio.c  | 151 -------------------------------------------
>  include/linux/vfio.h |  11 ----
>  2 files changed, 162 deletions(-)
> 
> vfio_group_put_external_user() is removable too when combined with the
> KVM series
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index d10d20d393b706..ba6fae95555ec7 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -1947,44 +1947,6 @@ struct vfio_group
> *vfio_group_get_external_user(struct file *filep)
>  }
>  EXPORT_SYMBOL_GPL(vfio_group_get_external_user);
> 
> -/*
> - * External user API, exported by symbols to be linked dynamically.
> - * The external user passes in a device pointer
> - * to verify that:
> - *	- A VFIO group is assiciated with the device;
> - *	- IOMMU is set for the group.
> - * If both checks passed, vfio_group_get_external_user_from_dev()
> - * increments the container user counter to prevent the VFIO group
> - * from disposal before external user exits and returns the pointer
> - * to the VFIO group.
> - *
> - * When the external user finishes using the VFIO group, it calls
> - * vfio_group_put_external_user() to release the VFIO group and
> - * decrement the container user counter.
> - *
> - * @dev [in]	: device
> - * Return error PTR or pointer to VFIO group.
> - */
> -
> -struct vfio_group *vfio_group_get_external_user_from_dev(struct device
> *dev)
> -{
> -	struct vfio_group *group;
> -	int ret;
> -
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return ERR_PTR(-ENODEV);
> -
> -	ret = vfio_group_add_container_user(group);
> -	if (ret) {
> -		vfio_group_put(group);
> -		return ERR_PTR(ret);
> -	}
> -
> -	return group;
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_get_external_user_from_dev);
> -
>  void vfio_group_put_external_user(struct vfio_group *group)
>  {
>  	vfio_group_try_dissolve_container(group);
> @@ -2218,101 +2180,6 @@ int vfio_unpin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  }
>  EXPORT_SYMBOL(vfio_unpin_pages);
> 
> -/*
> - * Pin a set of guest IOVA PFNs and return their associated host PFNs for a
> - * VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be pinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * @prot [in]		: protection flags
> - * @phys_pfn [out]	: array of host PFNs
> - * Return error or number of pages pinned.
> - */
> -int vfio_group_pin_pages(struct vfio_group *group,
> -			 unsigned long *user_iova_pfn, int npage,
> -			 int prot, unsigned long *phys_pfn)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !phys_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (group->dev_counter > 1)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data,
> -					     group->iommu_group,
> user_iova_pfn,
> -					     npage, prot, phys_pfn);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_pin_pages);
> -
> -/*
> - * Unpin a set of guest IOVA PFNs for a VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: vfio group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be
> unpinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater than
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * Return error or number of pages unpinned.
> - */
> -int vfio_group_unpin_pages(struct vfio_group *group,
> -			   unsigned long *user_iova_pfn, int npage)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->unpin_pages))
> -		ret = driver->ops->unpin_pages(container->iommu_data,
> -					       user_iova_pfn, npage);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_unpin_pages);
> -
> -
>  /*
>   * This interface allows the CPUs to perform some sort of virtual DMA on
>   * behalf of the device.
> @@ -2515,24 +2382,6 @@ int vfio_unregister_notifier(struct vfio_device
> *dev,
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> 
> -struct iommu_domain *vfio_group_iommu_domain(struct vfio_group
> *group)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -
> -	if (!group)
> -		return ERR_PTR(-EINVAL);
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->group_iommu_domain))
> -		return driver->ops->group_iommu_domain(container-
> >iommu_data,
> -						       group->iommu_group);
> -
> -	return ERR_PTR(-ENOTTY);
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);
> -
>  /*
>   * Module/class support
>   */
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 91d46e532ca104..9a9981c2622896 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -140,8 +140,6 @@ int vfio_mig_get_next_state(struct vfio_device
> *device,
>   */
>  extern struct vfio_group *vfio_group_get_external_user(struct file *filep);
>  extern void vfio_group_put_external_user(struct vfio_group *group);
> -extern struct vfio_group *vfio_group_get_external_user_from_dev(struct
> device
> -								*dev);
>  extern bool vfio_external_group_match_file(struct vfio_group *group,
>  					   struct file *filep);
>  extern int vfio_external_user_iommu_id(struct vfio_group *group);
> @@ -154,18 +152,9 @@ extern int vfio_pin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  			  int npage, int prot, unsigned long *phys_pfn);
>  extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			    int npage);
> -
> -extern int vfio_group_pin_pages(struct vfio_group *group,
> -				unsigned long *user_iova_pfn, int npage,
> -				int prot, unsigned long *phys_pfn);
> -extern int vfio_group_unpin_pages(struct vfio_group *group,
> -				  unsigned long *user_iova_pfn, int npage);
> -
>  extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
> -extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> -
>  /* each type has independent events */
>  enum vfio_notify_type {
>  	VFIO_IOMMU_NOTIFY = 0,
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 6/7] vfio: Remove dead code
@ 2022-04-22  1:52     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:52 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Now that callers have been updated to use the vfio_device APIs the driver
> facing group interface is no longer used, delete it:
> 
> - vfio_group_get_external_user_from_dev()
> - vfio_group_pin_pages()
> - vfio_group_unpin_pages()
> - vfio_group_iommu_domain()
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/vfio/vfio.c  | 151 -------------------------------------------
>  include/linux/vfio.h |  11 ----
>  2 files changed, 162 deletions(-)
> 
> vfio_group_put_external_user() is removable too when combined with the
> KVM series
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index d10d20d393b706..ba6fae95555ec7 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -1947,44 +1947,6 @@ struct vfio_group
> *vfio_group_get_external_user(struct file *filep)
>  }
>  EXPORT_SYMBOL_GPL(vfio_group_get_external_user);
> 
> -/*
> - * External user API, exported by symbols to be linked dynamically.
> - * The external user passes in a device pointer
> - * to verify that:
> - *	- A VFIO group is assiciated with the device;
> - *	- IOMMU is set for the group.
> - * If both checks passed, vfio_group_get_external_user_from_dev()
> - * increments the container user counter to prevent the VFIO group
> - * from disposal before external user exits and returns the pointer
> - * to the VFIO group.
> - *
> - * When the external user finishes using the VFIO group, it calls
> - * vfio_group_put_external_user() to release the VFIO group and
> - * decrement the container user counter.
> - *
> - * @dev [in]	: device
> - * Return error PTR or pointer to VFIO group.
> - */
> -
> -struct vfio_group *vfio_group_get_external_user_from_dev(struct device
> *dev)
> -{
> -	struct vfio_group *group;
> -	int ret;
> -
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return ERR_PTR(-ENODEV);
> -
> -	ret = vfio_group_add_container_user(group);
> -	if (ret) {
> -		vfio_group_put(group);
> -		return ERR_PTR(ret);
> -	}
> -
> -	return group;
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_get_external_user_from_dev);
> -
>  void vfio_group_put_external_user(struct vfio_group *group)
>  {
>  	vfio_group_try_dissolve_container(group);
> @@ -2218,101 +2180,6 @@ int vfio_unpin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  }
>  EXPORT_SYMBOL(vfio_unpin_pages);
> 
> -/*
> - * Pin a set of guest IOVA PFNs and return their associated host PFNs for a
> - * VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be pinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * @prot [in]		: protection flags
> - * @phys_pfn [out]	: array of host PFNs
> - * Return error or number of pages pinned.
> - */
> -int vfio_group_pin_pages(struct vfio_group *group,
> -			 unsigned long *user_iova_pfn, int npage,
> -			 int prot, unsigned long *phys_pfn)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !phys_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (group->dev_counter > 1)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data,
> -					     group->iommu_group,
> user_iova_pfn,
> -					     npage, prot, phys_pfn);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_pin_pages);
> -
> -/*
> - * Unpin a set of guest IOVA PFNs for a VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: vfio group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be
> unpinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater than
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * Return error or number of pages unpinned.
> - */
> -int vfio_group_unpin_pages(struct vfio_group *group,
> -			   unsigned long *user_iova_pfn, int npage)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->unpin_pages))
> -		ret = driver->ops->unpin_pages(container->iommu_data,
> -					       user_iova_pfn, npage);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_unpin_pages);
> -
> -
>  /*
>   * This interface allows the CPUs to perform some sort of virtual DMA on
>   * behalf of the device.
> @@ -2515,24 +2382,6 @@ int vfio_unregister_notifier(struct vfio_device
> *dev,
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> 
> -struct iommu_domain *vfio_group_iommu_domain(struct vfio_group
> *group)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -
> -	if (!group)
> -		return ERR_PTR(-EINVAL);
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->group_iommu_domain))
> -		return driver->ops->group_iommu_domain(container-
> >iommu_data,
> -						       group->iommu_group);
> -
> -	return ERR_PTR(-ENOTTY);
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);
> -
>  /*
>   * Module/class support
>   */
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 91d46e532ca104..9a9981c2622896 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -140,8 +140,6 @@ int vfio_mig_get_next_state(struct vfio_device
> *device,
>   */
>  extern struct vfio_group *vfio_group_get_external_user(struct file *filep);
>  extern void vfio_group_put_external_user(struct vfio_group *group);
> -extern struct vfio_group *vfio_group_get_external_user_from_dev(struct
> device
> -								*dev);
>  extern bool vfio_external_group_match_file(struct vfio_group *group,
>  					   struct file *filep);
>  extern int vfio_external_user_iommu_id(struct vfio_group *group);
> @@ -154,18 +152,9 @@ extern int vfio_pin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  			  int npage, int prot, unsigned long *phys_pfn);
>  extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			    int npage);
> -
> -extern int vfio_group_pin_pages(struct vfio_group *group,
> -				unsigned long *user_iova_pfn, int npage,
> -				int prot, unsigned long *phys_pfn);
> -extern int vfio_group_unpin_pages(struct vfio_group *group,
> -				  unsigned long *user_iova_pfn, int npage);
> -
>  extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
> -extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> -
>  /* each type has independent events */
>  enum vfio_notify_type {
>  	VFIO_IOMMU_NOTIFY = 0,
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 6/7] vfio: Remove dead code
@ 2022-04-22  1:52     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  1:52 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> Now that callers have been updated to use the vfio_device APIs the driver
> facing group interface is no longer used, delete it:
> 
> - vfio_group_get_external_user_from_dev()
> - vfio_group_pin_pages()
> - vfio_group_unpin_pages()
> - vfio_group_iommu_domain()
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  drivers/vfio/vfio.c  | 151 -------------------------------------------
>  include/linux/vfio.h |  11 ----
>  2 files changed, 162 deletions(-)
> 
> vfio_group_put_external_user() is removable too when combined with the
> KVM series
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index d10d20d393b706..ba6fae95555ec7 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -1947,44 +1947,6 @@ struct vfio_group
> *vfio_group_get_external_user(struct file *filep)
>  }
>  EXPORT_SYMBOL_GPL(vfio_group_get_external_user);
> 
> -/*
> - * External user API, exported by symbols to be linked dynamically.
> - * The external user passes in a device pointer
> - * to verify that:
> - *	- A VFIO group is assiciated with the device;
> - *	- IOMMU is set for the group.
> - * If both checks passed, vfio_group_get_external_user_from_dev()
> - * increments the container user counter to prevent the VFIO group
> - * from disposal before external user exits and returns the pointer
> - * to the VFIO group.
> - *
> - * When the external user finishes using the VFIO group, it calls
> - * vfio_group_put_external_user() to release the VFIO group and
> - * decrement the container user counter.
> - *
> - * @dev [in]	: device
> - * Return error PTR or pointer to VFIO group.
> - */
> -
> -struct vfio_group *vfio_group_get_external_user_from_dev(struct device
> *dev)
> -{
> -	struct vfio_group *group;
> -	int ret;
> -
> -	group = vfio_group_get_from_dev(dev);
> -	if (!group)
> -		return ERR_PTR(-ENODEV);
> -
> -	ret = vfio_group_add_container_user(group);
> -	if (ret) {
> -		vfio_group_put(group);
> -		return ERR_PTR(ret);
> -	}
> -
> -	return group;
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_get_external_user_from_dev);
> -
>  void vfio_group_put_external_user(struct vfio_group *group)
>  {
>  	vfio_group_try_dissolve_container(group);
> @@ -2218,101 +2180,6 @@ int vfio_unpin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  }
>  EXPORT_SYMBOL(vfio_unpin_pages);
> 
> -/*
> - * Pin a set of guest IOVA PFNs and return their associated host PFNs for a
> - * VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: VFIO group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be pinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * @prot [in]		: protection flags
> - * @phys_pfn [out]	: array of host PFNs
> - * Return error or number of pages pinned.
> - */
> -int vfio_group_pin_pages(struct vfio_group *group,
> -			 unsigned long *user_iova_pfn, int npage,
> -			 int prot, unsigned long *phys_pfn)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !phys_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (group->dev_counter > 1)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->pin_pages))
> -		ret = driver->ops->pin_pages(container->iommu_data,
> -					     group->iommu_group,
> user_iova_pfn,
> -					     npage, prot, phys_pfn);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_pin_pages);
> -
> -/*
> - * Unpin a set of guest IOVA PFNs for a VFIO group.
> - *
> - * The caller needs to call vfio_group_get_external_user() or
> - * vfio_group_get_external_user_from_dev() prior to calling this interface,
> - * so as to prevent the VFIO group from disposal in the middle of the call.
> - * But it can keep the reference to the VFIO group for several calls into
> - * this interface.
> - * After finishing using of the VFIO group, the caller needs to release the
> - * VFIO group by calling vfio_group_put_external_user().
> - *
> - * @group [in]		: vfio group
> - * @user_iova_pfn [in]	: array of user/guest IOVA PFNs to be
> unpinned.
> - * @npage [in]		: count of elements in user_iova_pfn array.
> - *			  This count should not be greater than
> - *			  VFIO_PIN_PAGES_MAX_ENTRIES.
> - * Return error or number of pages unpinned.
> - */
> -int vfio_group_unpin_pages(struct vfio_group *group,
> -			   unsigned long *user_iova_pfn, int npage)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -	int ret;
> -
> -	if (!group || !user_iova_pfn || !npage)
> -		return -EINVAL;
> -
> -	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
> -		return -E2BIG;
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->unpin_pages))
> -		ret = driver->ops->unpin_pages(container->iommu_data,
> -					       user_iova_pfn, npage);
> -	else
> -		ret = -ENOTTY;
> -
> -	return ret;
> -}
> -EXPORT_SYMBOL(vfio_group_unpin_pages);
> -
> -
>  /*
>   * This interface allows the CPUs to perform some sort of virtual DMA on
>   * behalf of the device.
> @@ -2515,24 +2382,6 @@ int vfio_unregister_notifier(struct vfio_device
> *dev,
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
> 
> -struct iommu_domain *vfio_group_iommu_domain(struct vfio_group
> *group)
> -{
> -	struct vfio_container *container;
> -	struct vfio_iommu_driver *driver;
> -
> -	if (!group)
> -		return ERR_PTR(-EINVAL);
> -
> -	container = group->container;
> -	driver = container->iommu_driver;
> -	if (likely(driver && driver->ops->group_iommu_domain))
> -		return driver->ops->group_iommu_domain(container-
> >iommu_data,
> -						       group->iommu_group);
> -
> -	return ERR_PTR(-ENOTTY);
> -}
> -EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);
> -
>  /*
>   * Module/class support
>   */
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 91d46e532ca104..9a9981c2622896 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -140,8 +140,6 @@ int vfio_mig_get_next_state(struct vfio_device
> *device,
>   */
>  extern struct vfio_group *vfio_group_get_external_user(struct file *filep);
>  extern void vfio_group_put_external_user(struct vfio_group *group);
> -extern struct vfio_group *vfio_group_get_external_user_from_dev(struct
> device
> -								*dev);
>  extern bool vfio_external_group_match_file(struct vfio_group *group,
>  					   struct file *filep);
>  extern int vfio_external_user_iommu_id(struct vfio_group *group);
> @@ -154,18 +152,9 @@ extern int vfio_pin_pages(struct vfio_device *vdev,
> unsigned long *user_pfn,
>  			  int npage, int prot, unsigned long *phys_pfn);
>  extern int vfio_unpin_pages(struct vfio_device *vdev, unsigned long
> *user_pfn,
>  			    int npage);
> -
> -extern int vfio_group_pin_pages(struct vfio_group *group,
> -				unsigned long *user_iova_pfn, int npage,
> -				int prot, unsigned long *phys_pfn);
> -extern int vfio_group_unpin_pages(struct vfio_group *group,
> -				  unsigned long *user_iova_pfn, int npage);
> -
>  extern int vfio_dma_rw(struct vfio_device *vdev, dma_addr_t user_iova,
>  		       void *data, size_t len, bool write);
> 
> -extern struct iommu_domain *vfio_group_iommu_domain(struct
> vfio_group *group);
> -
>  /* each type has independent events */
>  enum vfio_notify_type {
>  	VFIO_IOMMU_NOTIFY = 0,
> --
> 2.36.0


^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-22  2:11     ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  2:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Christoph Hellwig, Liu, Yi L

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>, with one nit

> @@ -1544,8 +1550,10 @@ static int vfio_device_fops_release(struct inode
> *inode, struct file *filep)
>  	struct vfio_device *device = filep->private_data;
> 
>  	mutex_lock(&device->dev_set->lock);
> -	if (!--device->open_count && device->ops->close_device)
> +	vfio_assert_device_open(device);
> +	if (device->open_count == 1 && device->ops->close_device)
>  		device->ops->close_device(device);
> +	device->open_count--;
>  	mutex_unlock(&device->dev_set->lock);

Is it necessary to add the assertion here? This is the only place that
decrements the counter, and there is no similar assertion in other
release()/put() functions.
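
For reference, a minimal driver-side sketch of the lifecycle the patch
description relies on. The callbacks and the GUEST_GFN constant are
hypothetical; the point is only that the listed helpers are legal anywhere
inside the open_device()/close_device() window, where the core already
holds the container reference:

	static unsigned long my_gfn = GUEST_GFN, my_pfn;

	static int my_open_device(struct vfio_device *vdev)
	{
		/* Legal: container_users is already elevated for the whole
		 * open_device()..close_device() window, so no explicit
		 * vfio_group_add_container_user() call is needed. */
		if (vfio_pin_pages(vdev, &my_gfn, 1,
				   IOMMU_READ | IOMMU_WRITE, &my_pfn) != 1)
			return -EFAULT;
		return 0;
	}

	static void my_close_device(struct vfio_device *vdev)
	{
		/* Still legal: this runs before the core drops the
		 * container reference. */
		vfio_unpin_pages(vdev, &my_gfn, 1);
	}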

Thanks
Kevin

^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-22  2:11     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  2:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>, with one nit

> @@ -1544,8 +1550,10 @@ static int vfio_device_fops_release(struct inode
> *inode, struct file *filep)
>  	struct vfio_device *device = filep->private_data;
> 
>  	mutex_lock(&device->dev_set->lock);
> -	if (!--device->open_count && device->ops->close_device)
> +	vfio_assert_device_open(device);
> +	if (device->open_count == 1 && device->ops->close_device)
>  		device->ops->close_device(device);
> +	device->open_count--;
>  	mutex_unlock(&device->dev_set->lock);

Is it necessary to add the assertion here? This is the only place that
decrements the counter, and there is no similar assertion in other
release()/put() functions.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-22  2:11     ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-22  2:11 UTC (permalink / raw)
  To: Jason Gunthorpe, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Wang, Zhi A
  Cc: Tony Krowiak, Eric Farman, Liu, Yi L, Christoph Hellwig

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Friday, April 22, 2022 12:29 AM
> 
> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>, with one nit

> @@ -1544,8 +1550,10 @@ static int vfio_device_fops_release(struct inode
> *inode, struct file *filep)
>  	struct vfio_device *device = filep->private_data;
> 
>  	mutex_lock(&device->dev_set->lock);
> -	if (!--device->open_count && device->ops->close_device)
> +	vfio_assert_device_open(device);
> +	if (device->open_count == 1 && device->ops->close_device)
>  		device->ops->close_device(device);
> +	device->open_count--;
>  	mutex_unlock(&device->dev_set->lock);

Is it necessary to add the assertion here? This is the only place that
decrements the counter, and there is no similar assertion in other
release()/put() functions.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-22  6:21     ` Christoph Hellwig
  -1 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:21 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Tony Krowiak, Eric Farman,
	Christoph Hellwig, Tian, Kevin, Liu, Yi L

Looks good (for now):

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-22  6:21     ` Christoph Hellwig
  0 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:21 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

Looks good (for now):

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-22  6:22     ` Christoph Hellwig
  -1 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Tony Krowiak, Eric Farman,
	Christoph Hellwig, Tian, Kevin, Liu, Yi L

Nit: why do some of these patches that don't touch the mdev code have
mdev in the subject?

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-22  6:22     ` Christoph Hellwig
  0 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

Nit: why do some of these patches that don't touch the mdev code have
mdev in the subject?

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-22  6:22     ` Christoph Hellwig
  -1 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Tony Krowiak, Eric Farman,
	Christoph Hellwig, Tian, Kevin, Liu, Yi L

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages
@ 2022-04-22  6:22     ` Christoph Hellwig
  0 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-22  6:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
  2022-04-22  1:39     ` [Intel-gfx] " Tian, Kevin
  (?)
@ 2022-04-22 17:07       ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:07 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Wang, Zhi A, Tony Krowiak, Eric Farman, Liu, Yi L,
	Christoph Hellwig

On Fri, Apr 22, 2022 at 01:39:09AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> > 
> > To support the unconverted kvmgt mdev driver add
> > mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> > vfio_mdev.c puts in the drv_data.
> 
> stale comment

woops, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-22 17:07       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:07 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Wang, Zhi A,
	Tony Krowiak, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Vivi, Rodrigo,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 01:39:09AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> > 
> > To support the unconverted kvmgt mdev driver add
> > mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> > vfio_mdev.c puts in the drv_data.
> 
> stale comment

woops, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device
@ 2022-04-22 17:07       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:07 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Vivi, Rodrigo, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 01:39:09AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > All callers have a struct vfio_device trivially available, pass it in
> > directly and avoid calling the expensive vfio_group_get_from_dev().
> > 
> > To support the unconverted kvmgt mdev driver add
> > mdev_legacy_get_vfio_device() which will return the vfio_device pointer
> > vfio_mdev.c puts in the drv_data.
> 
> stale comment

woops, done

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-22  1:50     ` Tian, Kevin
  (?)
@ 2022-04-22 17:59       ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:59 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Wang, Zhi A,
	Tony Krowiak, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Vivi, Rodrigo,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > Every caller has a readily available vfio_device pointer, use that instead
> > of passing in a generic struct device. The struct vfio_device already
> > contains the group we need so this avoids complexity, extra refcountings,
> > and a confusing lifecycle model.
> 
> Using the same description as the last patch leaves the impression that
> the two patches do exactly the same type of change. But this
> patch actually includes one more change to grab a reference on the
> container. This is worth an explanation.

How about this:

Every caller has a readily available vfio_device pointer, use that instead
of passing in a generic struct device. Change vfio_dma_rw() to take in the
struct vfio_device and move the container users that would have been held
by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
vfio_pin/unpin_pages().
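
Illustrative only (the parameter names here are paraphrased rather than
quoted from the patch): the driver-facing prototype ends up shaped
roughly as

  int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova,
                  void *data, size_t len, bool write);

so the caller passes the vfio_device it already embeds, and the
container user reference is taken and released inside vfio_dma_rw()
itself rather than via vfio_group_get_external_user_from_dev().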

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-22 17:59       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:59 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Vivi, Rodrigo, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > Every caller has a readily available vfio_device pointer, use that instead
> > of passing in a generic struct device. The struct vfio_device already
> > contains the group we need so this avoids complexity, extra refcountings,
> > and a confusing lifecycle model.
> 
> Using the same description as the last patch leaves the impression that
> the two patches do exactly the same type of change. But this
> patch actually includes one more change to grab a reference on the
> container. This is worth an explanation.

How about this:

Every caller has a readily available vfio_device pointer, use that instead
of passing in a generic struct device. Change vfio_dma_rw() to take in the
struct vfio_device and move the container users that would have been held
by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
vfio_pin/unpin_pages().

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-22 17:59       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 17:59 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Wang, Zhi A, Tony Krowiak, Eric Farman,
	Christoph Hellwig, Liu, Yi L

On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Friday, April 22, 2022 12:29 AM
> > 
> > Every caller has a readily available vfio_device pointer, use that instead
> > of passing in a generic struct device. The struct vfio_device already
> > contains the group we need so this avoids complexity, extra refcountings,
> > and a confusing lifecycle model.
> 
> Using the same description as the last patch leaves the impression that
> the two patches do exactly the same type of change. But this
> patch actually includes one more change to grab a reference on the
> container. This is worth an explanation.

How about this:

Every caller has a readily available vfio_device pointer, use that instead
of passing in a generic struct device. Change vfio_dma_rw() to take in the
struct vfio_device and move the container users that would have been held
by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
vfio_pin/unpin_pages().

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-22  2:11     ` Tian, Kevin
  (?)
@ 2022-04-22 20:55       ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:55 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Wang, Zhi A,
	Tony Krowiak, Eric Farman, Vasily Gorbik, Heiko Carstens,
	Alex Williamson, Harald Freudenberger, Vivi, Rodrigo,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 02:11:27AM +0000, Tian, Kevin wrote:

> >  	mutex_lock(&device->dev_set->lock);
> > -	if (!--device->open_count && device->ops->close_device)
> > +	vfio_assert_device_open(device);
> > +	if (device->open_count == 1 && device->ops->close_device)
> >  		device->ops->close_device(device);
> > +	device->open_count--;
> >  	mutex_unlock(&device->dev_set->lock);
> 
> Is it necessary to add an assertion here? This is the only place to
> decrement the counter and there is no similar assertion in other
> release()/put() functions.

Necessary, no, but since we have it we may as well check it here. It
is common to check that refcounts don't underflow.
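
As a hedged illustration (the exact in-tree form may differ slightly),
the guard amounts to a one-line WARN_ON_ONCE() check on open_count:

static bool vfio_assert_device_open(struct vfio_device *device)
{
	/* Warn once if a driver calls a VFIO helper while the device is
	 * not between open_device() and close_device(). */
	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
}

The release hunk quoted above keeps the decrement after close_device(),
so the same guard also holds inside a driver's close_device() callback.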

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-22 20:55       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:55 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Vivi, Rodrigo, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 02:11:27AM +0000, Tian, Kevin wrote:

> >  	mutex_lock(&device->dev_set->lock);
> > -	if (!--device->open_count && device->ops->close_device)
> > +	vfio_assert_device_open(device);
> > +	if (device->open_count == 1 && device->ops->close_device)
> >  		device->ops->close_device(device);
> > +	device->open_count--;
> >  	mutex_unlock(&device->dev_set->lock);
> 
> Is it necessary to add an assertion here? This is the only place to
> decrement the counter and there is no similar assertion in other
> release()/put() functions.

Necessary, no, but since we have it we may as well check it here. It
is common to check that refcounts don't underflow.

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-22 20:55       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:55 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Vivi, Rodrigo, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Wang, Zhi A, Tony Krowiak, Eric Farman,
	Christoph Hellwig, Liu, Yi L

On Fri, Apr 22, 2022 at 02:11:27AM +0000, Tian, Kevin wrote:

> >  	mutex_lock(&device->dev_set->lock);
> > -	if (!--device->open_count && device->ops->close_device)
> > +	vfio_assert_device_open(device);
> > +	if (device->open_count == 1 && device->ops->close_device)
> >  		device->ops->close_device(device);
> > +	device->open_count--;
> >  	mutex_unlock(&device->dev_set->lock);
> 
> Is it necessary to add an assertion here? This is the only place to
> decrement the counter and there is no similar assertion in other
> release()/put() functions.

Necessary, no, but since we have it we may as well check it here. It
is common to check that refcounts don't underflow.

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-22  6:22     ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-22 20:56       ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Jason Herne,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> Nit: why do some of these patches that don't touch the mdev code have
> mdev in the subject?

I consider these APIs to be 'mdev apis' because only mdev drivers
should call them.
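
For example (a sketch only, with approximated argument names and a
hypothetical vgpu->vfio_device embedding in the caller), a driver calls
them from inside its open_device()/close_device() window like:

	unsigned long gfn = ..., pfn;
	int ret;

	/* pin one guest pfn for DMA; prot flags come from linux/iommu.h */
	ret = vfio_pin_pages(&vgpu->vfio_device, &gfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &pfn);
	if (ret != 1)
		return ret < 0 ? ret : -EINVAL;
	...
	vfio_unpin_pages(&vgpu->vfio_device, &gfn, 1);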

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-22 20:56       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Tony Krowiak, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> Nit: why do some of these patches that don't touch the mdev code have
> mdev in the subject?

I consider these APIs to be 'mdev apis' because only mdev drivers
should call them.

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-22 20:56       ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-22 20:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Tony Krowiak, Eric Farman, Tian, Kevin,
	Liu, Yi L

On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> Nit: why do some of these patches that don't touch the mdev code have
> mdev in the subject?

I consider these APIs to be 'mdev apis' because only mdev drivers
should call them.

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-22 20:56       ` [Intel-gfx] " Jason Gunthorpe
@ 2022-04-23  5:15         ` Christoph Hellwig
  -1 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-23  5:15 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Christoph Hellwig, Alexander Gordeev, David Airlie,
	Alex Williamson, Christian Borntraeger, Cornelia Huck,
	Jonathan Corbet, Daniel Vetter, dri-devel, Harald Freudenberger,
	Vasily Gorbik, Heiko Carstens, intel-gfx, intel-gvt-dev,
	Jani Nikula, Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede,
	linux-doc, linux-s390, Matthew Rosato, Peter Oberparleiter,
	Halil Pasic, Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin,
	Vineeth Vijayan, Zhenyu Wang, Zhi Wang, Tony Krowiak,
	Eric Farman, Tian, Kevin, Liu, Yi L

On Fri, Apr 22, 2022 at 05:56:51PM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> > Nit: why do some of these patches that don't touch the mdev code have
> > mdev in the subject?
> 
> I consider these APIs to be 'mdev apis' because only mdev drivers
> should call them.

I thought we settled on 'emulated IOMMU device' in the VFIO core for
them, leaving mdev just for the odd lifecycle management glue.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-23  5:15         ` Christoph Hellwig
  0 siblings, 0 replies; 80+ messages in thread
From: Christoph Hellwig @ 2022-04-23  5:15 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, Apr 22, 2022 at 05:56:51PM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> > Nit: why do some of these patches that don't touch the mdev code have
> > mdev in the subject?
> 
> I consider these APIs to be 'mdev apis' because only mdev drivers
> should call them.

I thought we settled on 'emulated IOMMU device' in the VFIO core for
them, leaving mdev just for the odd lifecycle management glue.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
  2022-04-23  5:15         ` [Intel-gfx] " Christoph Hellwig
  (?)
@ 2022-04-23 12:49           ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-23 12:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Alexander Gordeev, David Airlie, Alex Williamson,
	Christian Borntraeger, Cornelia Huck, Jonathan Corbet,
	Daniel Vetter, dri-devel, Harald Freudenberger, Vasily Gorbik,
	Heiko Carstens, intel-gfx, intel-gvt-dev, Jani Nikula,
	Jason Herne, Joonas Lahtinen, kvm, Kirti Wankhede, linux-doc,
	linux-s390, Matthew Rosato, Peter Oberparleiter, Halil Pasic,
	Rodrigo Vivi, Sven Schnelle, Tvrtko Ursulin, Vineeth Vijayan,
	Zhenyu Wang, Zhi Wang, Tony Krowiak, Eric Farman, Tian, Kevin,
	Liu, Yi L

On Sat, Apr 23, 2022 at 07:15:52AM +0200, Christoph Hellwig wrote:
> On Fri, Apr 22, 2022 at 05:56:51PM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> > > Nit: why do some of these patches that don't touch the mdev code have
> > > mdev in the subject?
> > 
> > I consider these APIs to be 'mdev apis' because only mdev drivers
> > should call them.
> 
> I thought we settled on 'emulated IOMMU device' in the VFIO core for
> them, leaving mdev just for the odd lifecycle management glue.

Yes

vfio/emulated

?

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-23 12:49           ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-23 12:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev, linux-s390,
	Liu, Yi L, Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Zhi Wang, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Alex Williamson,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Jason Herne,
	Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle

On Sat, Apr 23, 2022 at 07:15:52AM +0200, Christoph Hellwig wrote:
> On Fri, Apr 22, 2022 at 05:56:51PM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> > > Nit: why do some of these patches that don't touch the mdev code have
> > > mdev in the subject?
> > 
> > I consider these APIs to be 'mdev apis' because only mdev drivers
> > should call them.
> 
> I thought we settled on 'emulated IOMMU device' in the VFIO core for
> them, leaving mdev just for the odd lifecycle management glue.

Yes

vfio/emulated

?

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages()
@ 2022-04-23 12:49           ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-23 12:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, linux-s390, Liu, Yi L,
	Matthew Rosato, Jonathan Corbet, Halil Pasic,
	Christian Borntraeger, intel-gfx, Tony Krowiak, Eric Farman,
	Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Sat, Apr 23, 2022 at 07:15:52AM +0200, Christoph Hellwig wrote:
> On Fri, Apr 22, 2022 at 05:56:51PM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 22, 2022 at 08:22:32AM +0200, Christoph Hellwig wrote:
> > > Nit: why do some of these patches that don't touch the mdev code have
> > > mdev in the subject?
> > 
> > I consider these APIs to be 'mdev apis' because only mdev drivers
> > should call them.
> 
> I thought we settled on 'emulated IOMMU device' in the VFIO core for
> them, leaving mdev just for the odd lifecycle management glue.

Yes

vfio/emulated

?

Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
  2022-04-22 17:59       ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-24  5:52         ` Tian, Kevin
  -1 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-24  5:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, Joonas Lahtinen, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Wang, Zhi A, Tony Krowiak, Eric Farman, Vasily Gorbik,
	Heiko Carstens, Jani Nikula, Alex Williamson,
	Harald Freudenberger, Zhenyu Wang, Vivi, Rodrigo, intel-gvt-dev,
	Jason Herne, Tvrtko Ursulin, Cornelia Huck, Peter Oberparleiter,
	Sven Schnelle, Daniel Vetter

> From: Jason Gunthorpe
> Sent: Saturday, April 23, 2022 2:00 AM
> 
> On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Friday, April 22, 2022 12:29 AM
> > >
> > > Every caller has a readily available vfio_device pointer, use that instead
> > > of passing in a generic struct device. The struct vfio_device already
> > > contains the group we need so this avoids complexity, extra refcountings,
> > > and a confusing lifecycle model.
> >
> > Using the same description as the last patch leaves the impression that
> > the two patches do exactly the same type of change. But this
> > patch actually includes one more change to grab a reference on the
> > container. This is worth an explanation.
> 
> How about this:
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. Change vfio_dma_rw() to take in the
> struct vfio_device and move the container users that would have been held
> by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
> vfio_pin/unpin_pages().
> 

Yes, this is clearer.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* RE: [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-24  5:52         ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-24  5:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Wang, Zhi A,
	Jason Herne, Eric Farman, Vasily Gorbik, intel-gfx,
	Alex Williamson, Harald Freudenberger, Vivi, Rodrigo,
	intel-gvt-dev, Tony Krowiak, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

> From: Jason Gunthorpe
> Sent: Saturday, April 23, 2022 2:00 AM
> 
> On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Friday, April 22, 2022 12:29 AM
> > >
> > > Every caller has a readily available vfio_device pointer, use that instead
> > > of passing in a generic struct device. The struct vfio_device already
> > > contains the group we need so this avoids complexity, extra refcountings,
> > > and a confusing lifecycle model.
> >
> > Using the same description as the last patch leaves the impression that
> > the two patches do exactly the same type of change. But this
> > patch actually includes one more change to grab a reference on the
> > container. This is worth an explanation.
> 
> How about this:
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. Change vfio_dma_rw() to take in the
> struct vfio_device and move the container users that would have been held
> by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
> vfio_pin/unpin_pages().
> 

Yes, this is clearer.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw()
@ 2022-04-24  5:52         ` Tian, Kevin
  0 siblings, 0 replies; 80+ messages in thread
From: Tian, Kevin @ 2022-04-24  5:52 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Matthew Rosato, linux-doc, David Airlie, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, kvm, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, Heiko Carstens, Jason Herne,
	Eric Farman, Vasily Gorbik, intel-gfx, Harald Freudenberger,
	Vivi, Rodrigo, intel-gvt-dev, Tony Krowiak, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

> From: Jason Gunthorpe
> Sent: Saturday, April 23, 2022 2:00 AM
> 
> On Fri, Apr 22, 2022 at 01:50:00AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Friday, April 22, 2022 12:29 AM
> > >
> > > Every caller has a readily available vfio_device pointer, use that instead
> > > of passing in a generic struct device. The struct vfio_device already
> > > contains the group we need so this avoids complexity, extra refcountings,
> > > and a confusing lifecycle model.
> >
> > Using the same description as the last patch leaves the impression that
> > the two patches do exactly the same type of change. But this
> > patch actually includes one more change to grab a reference on the
> > container. This is worth an explanation.
> 
> How about this:
> 
> Every caller has a readily available vfio_device pointer, use that instead
> of passing in a generic struct device. Change vfio_dma_rw() to take in the
> struct vfio_device and move the container users that would have been held
> by vfio_group_get_external_user_from_dev() to vfio_dma_rw() directly, like
> vfio_pin/unpin_pages().
> 

Yes, this is clearer.

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
  2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-29 17:31   ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-29 17:31 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Alexander Gordeev, David Airlie, Christian Borntraeger,
	Cornelia Huck, Jonathan Corbet, Daniel Vetter, dri-devel,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang,
	Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L,
	Christoph Hellwig

On Thu, Apr 21, 2022 at 01:28:31PM -0300, Jason Gunthorpe wrote:
> Prior series have transformed other parts of VFIO from working on struct
> device or struct vfio_group into working directly on struct
> vfio_device. Based on that work we now have vfio_device's readily
> available in all the drivers.
> 
> Update the rest of the driver facing API to use vfio_device as an input.
> 
> The following are switched from struct device to struct vfio_device:
>   vfio_register_notifier()
>   vfio_unregister_notifier()
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
> 
> The following group APIs are obsoleted and removed by just using struct
> vfio_device with the above:
>   vfio_group_pin_pages()
>   vfio_group_unpin_pages()
>   vfio_group_iommu_domain()
>   vfio_group_get_external_user_from_dev()
> 
> To retain the performance of the new device APIs relative to their group
> versions optimize how vfio_group_add_container_user() is used to avoid
> calling it when the driver must already guarantee the device is open and
> the container_users incrd.
> 
> The remaining exported VFIO group interfaces are only used by kvm, and are
> addressed by a parallel series.
> 
> This series is based on Christoph's gvt rework here:
> 
>  https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/
> 
> and so will need the PR merged first.

Hi Alex,

Since all the shared branch PRs are ready, do you have any remarks on
this series and the others before I rebase and repost them?

This one has a few changes to the commit messages outstanding, but v2
didn't have any code changes.

Also, what order would you like the different series in - they conflict
with each other a little bit. I suggest this:

- mdev group removal (this one)
- Remove vfio_device_get_from_dev()
  https://lore.kernel.org/r/0-v1-7f2292e6b2ba+44839-vfio_get_from_dev_jgg@nvidia.com
- Remove group from kvm
  https://lore.kernel.org/r/0-v1-33906a626da1+16b0-vfio_kvm_no_group_jgg@nvidia.com

All of them seem to have got enough reviews now.

I have one more series on this group topic and a few little patches still

It would be great if you could merge the gvt and iommu series together
into your tree toward linux-next so I can post patches against a
stable commit ID so the build-bots can test them.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
@ 2022-04-29 17:31   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-29 17:31 UTC (permalink / raw)
  To: Alex Williamson
  Cc: kvm, linux-doc, David Airlie, Eric Farman, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Zhi Wang, Tony Krowiak, Tian, Kevin, Vasily Gorbik,
	Heiko Carstens, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Thu, Apr 21, 2022 at 01:28:31PM -0300, Jason Gunthorpe wrote:
> Prior series have transformed other parts of VFIO from working on struct
> device or struct vfio_group into working directly on struct
> vfio_device. Based on that work we now have vfio_device's readily
> available in all the drivers.
> 
> Update the rest of the driver facing API to use vfio_device as an input.
> 
> The following are switched from struct device to struct vfio_device:
>   vfio_register_notifier()
>   vfio_unregister_notifier()
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
> 
> The following group APIs are obsoleted and removed by just using struct
> vfio_device with the above:
>   vfio_group_pin_pages()
>   vfio_group_unpin_pages()
>   vfio_group_iommu_domain()
>   vfio_group_get_external_user_from_dev()
> 
> To retain the performance of the new device APIs relative to their group
> versions optimize how vfio_group_add_container_user() is used to avoid
> calling it when the driver must already guarantee the device is open and
> the container_users incrd.
> 
> The remaining exported VFIO group interfaces are only used by kvm, and are
> addressed by a parallel series.
> 
> This series is based on Christoph's gvt rework here:
> 
>  https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/
> 
> and so will need the PR merged first.

Hi Alex,

Since all the shared branch PRs are ready, do you have any remarks on
this series and the others before I rebase and repost them?

This one has a few changes to the commit messages outstanding, but v2
didn't have any code changes.

Also, what order would you like the different series in - they conflict
with each other a little bit. I suggest this:

- mdev group removal (this one)
- Remove vfio_device_get_from_dev()
  https://lore.kernel.org/r/0-v1-7f2292e6b2ba+44839-vfio_get_from_dev_jgg@nvidia.com
- Remove group from kvm
  https://lore.kernel.org/r/0-v1-33906a626da1+16b0-vfio_kvm_no_group_jgg@nvidia.com

All of them seem to have got enough reviews now.

I have one more series on this group topic and a few little patches still

It would be great if you could merge the gvt and iommu series together
into your tree toward linux-next so I can post patches against a
stable commit ID so the build-bots can test them.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
@ 2022-04-29 17:31   ` Jason Gunthorpe
  0 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-04-29 17:31 UTC (permalink / raw)
  To: Alex Williamson
  Cc: kvm, linux-doc, David Airlie, Eric Farman, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Tony Krowiak, Vasily Gorbik, Heiko Carstens,
	Harald Freudenberger, Rodrigo Vivi, intel-gvt-dev, Jason Herne,
	Cornelia Huck, Peter Oberparleiter, Sven Schnelle

On Thu, Apr 21, 2022 at 01:28:31PM -0300, Jason Gunthorpe wrote:
> Prior series have transformed other parts of VFIO from working on struct
> device or struct vfio_group into working directly on struct
> vfio_device. Based on that work we now have vfio_device's readily
> available in all the drivers.
> 
> Update the rest of the driver facing API to use vfio_device as an input.
> 
> The following are switched from struct device to struct vfio_device:
>   vfio_register_notifier()
>   vfio_unregister_notifier()
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
> 
> The following group APIs are obsoleted and removed by just using struct
> vfio_device with the above:
>   vfio_group_pin_pages()
>   vfio_group_unpin_pages()
>   vfio_group_iommu_domain()
>   vfio_group_get_external_user_from_dev()
> 
> To retain the performance of the new device APIs relative to their group
> versions optimize how vfio_group_add_container_user() is used to avoid
> calling it when the driver must already guarantee the device is open and
> the container_users incrd.
> 
> The remaining exported VFIO group interfaces are only used by kvm, and are
> addressed by a parallel series.
> 
> This series is based on Christoph's gvt rework here:
> 
>  https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/
> 
> and so will need the PR merged first.

Hi Alex,

Since all the shared branch PRs are ready, do you have any remarks on
this series and the others before I rebase and repost them?

This one has a few changes to the commit messages outstanding, but v2
didn't have any code changes.

Also, what order would you like the different series in - they conflict
with each other a little bit. I suggest this:

- mdev group removal (this one)
- Remove vfio_device_get_from_dev()
  https://lore.kernel.org/r/0-v1-7f2292e6b2ba+44839-vfio_get_from_dev_jgg@nvidia.com
- Remove group from kvm
  https://lore.kernel.org/r/0-v1-33906a626da1+16b0-vfio_kvm_no_group_jgg@nvidia.com

All of them seem to have got enough reviews now.

I have one more series on this group topic and a few little patches still

It would be great if you could merge the gvt and iommu series together
into your tree toward linux-next so I can post patches against a
stable commit ID so the build-bots can test them.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
  (?)
@ 2022-04-29 20:28     ` Alex Williamson
  -1 siblings, 0 replies; 80+ messages in thread
From: Alex Williamson @ 2022-04-29 20:28 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Christian Borntraeger,
	Cornelia Huck, Jonathan Corbet, Daniel Vetter, dri-devel,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang,
	Tony Krowiak, Eric Farman, Christoph Hellwig, Tian, Kevin, Liu,
	Yi L

On Thu, 21 Apr 2022 13:28:38 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.

A comment here explaining that decrementing open_count is pushed until
after close_device to support this feature would help to explain the
somewhat subtle change in vfio_group_get_device_fd().
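
Something along these lines, purely as a sketch of the wording:

	/*
	 * The decrement of open_count is deliberately deferred until
	 * after close_device() returns, so helpers that assert the
	 * device is open (vfio_pin_pages() and friends) keep working
	 * from within a driver's close_device() callback.
	 */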

Otherwise the series looks ok with fixes noted by previous reviews.
Thanks,

Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [Intel-gfx] [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-29 20:28     ` Alex Williamson
  0 siblings, 0 replies; 80+ messages in thread
From: Alex Williamson @ 2022-04-29 20:28 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, dri-devel, Kirti Wankhede,
	Vineeth Vijayan, Alexander Gordeev, Christoph Hellwig,
	linux-s390, Liu, Yi L, Matthew Rosato, Jonathan Corbet,
	Halil Pasic, Christian Borntraeger, intel-gfx, Tony Krowiak,
	Eric Farman, Vasily Gorbik, Heiko Carstens, Harald Freudenberger,
	Rodrigo Vivi, intel-gvt-dev, Jason Herne, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Thu, 21 Apr 2022 13:28:38 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.

A comment here explaining that decrementing open_count is pushed until
after close_device to support this feature would help to explain the
somewhat subtle change in vfio_group_get_device_fd().

Otherwise the series looks ok with fixes noted by previous reviews.
Thanks,

Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
@ 2022-04-29 20:28     ` Alex Williamson
  0 siblings, 0 replies; 80+ messages in thread
From: Alex Williamson @ 2022-04-29 20:28 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, Tian, Kevin, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Zhi Wang, Tony Krowiak, Eric Farman, Vasily Gorbik,
	Heiko Carstens, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Thu, 21 Apr 2022 13:28:38 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> When the open_device() op is called the container_users is incremented and
> held incremented until close_device(). Thus, so long as drivers call
> functions within their open_device()/close_device() region they do not
> need to worry about the container_users.
> 
> These functions can all only be called between open_device() and
> close_device():
> 
>   vfio_pin_pages()
>   vfio_unpin_pages()
>   vfio_dma_rw()
>   vfio_register_notifier()
>   vfio_unregister_notifier()
> 
> Eliminate the calls to vfio_group_add_container_user() and add
> vfio_assert_device_open() to detect driver mis-use.

A comment here explaining that decrementing open_count is pushed until
after close_device to support this feature would help to explain the
somewhat subtle change in vfio_group_get_device_fd().

Otherwise the series looks ok with fixes noted by previous reviews.
Thanks,

Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
  2022-04-29 17:31   ` Jason Gunthorpe
  (?)
@ 2022-04-29 22:22     ` Alex Williamson
  -1 siblings, 0 replies; 80+ messages in thread
From: Alex Williamson @ 2022-04-29 22:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Alexander Gordeev, David Airlie, Christian Borntraeger,
	Cornelia Huck, Jonathan Corbet, Daniel Vetter, dri-devel,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang,
	Tony Krowiak, Tian, Kevin, Eric Farman, Liu, Yi L,
	Christoph Hellwig

On Fri, 29 Apr 2022 14:31:49 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> On Thu, Apr 21, 2022 at 01:28:31PM -0300, Jason Gunthorpe wrote:
> > Prior series have transformed other parts of VFIO from working on struct
> > device or struct vfio_group into working directly on struct
> > vfio_device. Based on that work we now have vfio_device's readily
> > available in all the drivers.
> > 
> > Update the rest of the driver facing API to use vfio_device as an input.
> > 
> > The following are switched from struct device to struct vfio_device:
> >   vfio_register_notifier()
> >   vfio_unregister_notifier()
> >   vfio_pin_pages()
> >   vfio_unpin_pages()
> >   vfio_dma_rw()
> > 
> > The following group APIs are obsoleted and removed by just using struct
> > vfio_device with the above:
> >   vfio_group_pin_pages()
> >   vfio_group_unpin_pages()
> >   vfio_group_iommu_domain()
> >   vfio_group_get_external_user_from_dev()
> > 
> > To retain the performance of the new device APIs relative to their group
> > versions optimize how vfio_group_add_container_user() is used to avoid
> > calling it when the driver must already guarantee the device is open and
> > the container_users incrd.
> > 
> > The remaining exported VFIO group interfaces are only used by kvm, and are
> > addressed by a parallel series.
> > 
> > This series is based on Christoph's gvt rework here:
> > 
> >  https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/
> > 
> > and so will need the PR merged first.  
> 
> Hi Alex,
> 
> Since all the shared branch PRs are ready, do you have any remarks on
> this series and the others before I rebase and repost them?

Only the nit in the commit log:
https://lore.kernel.org/all/20220429142820.6afe7bbe.alex.williamson@redhat.com/ 

> This one has a few changes to the commit messages outstanding, but v2
> didn't have any code changes.
> 
> Also, what order would you like the different series in - they conflict
> with each other a little bit. I suggest this:
> 
> - mdev group removal (this one)
> - Remove vfio_device_get_from_dev()
>   https://lore.kernel.org/r/0-v1-7f2292e6b2ba+44839-vfio_get_from_dev_jgg@nvidia.com
> - Remove group from kvm
>   https://lore.kernel.org/r/0-v1-33906a626da1+16b0-vfio_kvm_no_group_jgg@nvidia.com

I think you mean (v2):

https://lore.kernel.org/all/0-v2-6a528653a750+1578a-vfio_kvm_no_group_jgg@nvidia.com/

Otherwise, thanks for sorting these out for me.

> All of them seem to have got enough reviews now.
>
> I have one more series on this group topic and a few little patches still
> 
> It would be great if you could merge the gvt and iommu series together
> into your tree toward linux-next so I can post patches against a
> stable commit ID so the build-bots can test them.

Please check my vfio next branch and see if this matches what you're
looking for:

https://github.com/awilliam/linux-vfio/commits/next

I'll look for any fallout from Stephen and build bots on Monday's
linux-next compilation.  Thanks,

Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device
@ 2022-04-29 22:22     ` Alex Williamson
  0 siblings, 0 replies; 80+ messages in thread
From: Alex Williamson @ 2022-04-29 22:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: kvm, linux-doc, David Airlie, Eric Farman, dri-devel,
	Kirti Wankhede, Vineeth Vijayan, Alexander Gordeev,
	Christoph Hellwig, linux-s390, Liu, Yi L, Matthew Rosato,
	Jonathan Corbet, Halil Pasic, Christian Borntraeger, intel-gfx,
	Zhi Wang, Tony Krowiak, Tian,  Kevin, Vasily Gorbik,
	Heiko Carstens, Harald Freudenberger, Rodrigo Vivi,
	intel-gvt-dev, Jason Herne, Tvrtko Ursulin, Cornelia Huck,
	Peter Oberparleiter, Sven Schnelle

On Fri, 29 Apr 2022 14:31:49 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:

> On Thu, Apr 21, 2022 at 01:28:31PM -0300, Jason Gunthorpe wrote:
> > Prior series have transformed other parts of VFIO from working on struct
> > device or struct vfio_group into working directly on struct
> > vfio_device. Based on that work we now have vfio_device's readily
> > available in all the drivers.
> > 
> > Update the rest of the driver facing API to use vfio_device as an input.
> > 
> > The following are switched from struct device to struct vfio_device:
> >   vfio_register_notifier()
> >   vfio_unregister_notifier()
> >   vfio_pin_pages()
> >   vfio_unpin_pages()
> >   vfio_dma_rw()
> > 
> > The following group APIs are obsoleted and removed by just using struct
> > vfio_device with the above:
> >   vfio_group_pin_pages()
> >   vfio_group_unpin_pages()
> >   vfio_group_iommu_domain()
> >   vfio_group_get_external_user_from_dev()
> > 
> > To retain the performance of the new device APIs relative to their group
> > versions optimize how vfio_group_add_container_user() is used to avoid
> > calling it when the driver must already guarantee the device is open and
> > the container_users incrd.
> > 
> > The remaining exported VFIO group interfaces are only used by kvm, and are
> > addressed by a parallel series.
> > 
> > This series is based on Christoph's gvt rework here:
> > 
> >  https://lore.kernel.org/all/5a8b9f48-2c32-8177-1c18-e3bd7bfde558@intel.com/
> > 
> > and so will need the PR merged first.  
> 
> Hi Alex,
> 
> Since all the shared branch PRs are ready, do you have any remarks on
> this series and the others before I rebase and repost them?

Only the nit in the commit log:
https://lore.kernel.org/all/20220429142820.6afe7bbe.alex.williamson@redhat.com/ 

> This one has a few changes to the commit messages outstanding, but v2
> didn't have any code changes.
> 
> Also, what order would you like the different series in - they conflict
> with each other a little bit. I suggest this:
> 
> - mdev group removal (this one)
> - Remove vfio_device_get_from_dev()
>   https://lore.kernel.org/r/0-v1-7f2292e6b2ba+44839-vfio_get_from_dev_jgg@nvidia.com
> - Remove group from kvm
>   https://lore.kernel.org/r/0-v1-33906a626da1+16b0-vfio_kvm_no_group_jgg@nvidia.com

I think you mean (v2):

https://lore.kernel.org/all/0-v2-6a528653a750+1578a-vfio_kvm_no_group_jgg@nvidia.com/

Otherwise, thanks for sorting these out for me.

> All of them seem to have got enough reviews now.
>
> I have one more series on this group topic and a few little patches still
> 
> It would be great if you could merge the gvt and iommu series together
> into your tree toward linux-next so I can post patches against a
> stable commit ID so the build-bots can test them.

Please check my vfio next branch and see if this matches what you're
looking for:

https://github.com/awilliam/linux-vfio/commits/next

I'll look for any fallout from Stephen and build bots on Monday's
linux-next compilation.  Thanks,

Alex


^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user()
  2022-04-29 20:28     ` [Intel-gfx] " Alex Williamson
  (?)
@ 2022-05-02 16:22       ` Jason Gunthorpe
  -1 siblings, 0 replies; 80+ messages in thread
From: Jason Gunthorpe @ 2022-05-02 16:22 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Alexander Gordeev, David Airlie, Christian Borntraeger,
	Cornelia Huck, Jonathan Corbet, Daniel Vetter, dri-devel,
	Harald Freudenberger, Vasily Gorbik, Heiko Carstens, intel-gfx,
	intel-gvt-dev, Jani Nikula, Jason Herne, Joonas Lahtinen, kvm,
	Kirti Wankhede, linux-doc, linux-s390, Matthew Rosato,
	Peter Oberparleiter, Halil Pasic, Rodrigo Vivi, Sven Schnelle,
	Tvrtko Ursulin, Vineeth Vijayan, Zhenyu Wang, Zhi Wang,
	Tony Krowiak, Eric Farman, Christoph Hellwig, Tian, Kevin, Liu,
	Yi L

On Fri, Apr 29, 2022 at 02:28:20PM -0600, Alex Williamson wrote:
> On Thu, 21 Apr 2022 13:28:38 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
> 
> > When the open_device() op is called, container_users is incremented and
> > held incremented until close_device(). Thus, so long as drivers call
> > functions within their open_device()/close_device() region, they do not
> > need to worry about container_users.
> > 
> > These functions can all only be called between open_device() and
> > close_device():
> > 
> >   vfio_pin_pages()
> >   vfio_unpin_pages()
> >   vfio_dma_rw()
> >   vfio_register_notifier()
> >   vfio_unregister_notifier()
> > 
> > Eliminate the calls to vfio_group_add_container_user() and add
> > vfio_assert_device_open() to detect driver mis-use.
> 
> A comment here explaining that decrementing open_count is pushed until
> after close_device to support this feature would help to explain the
> somewhat subtle change in vfio_group_get_device_fd().

I changed it like this:

Eliminate the calls to vfio_group_add_container_user() and add
vfio_assert_device_open() to detect driver mis-use. This causes the
close_device() op to check device->open_count so always leave it elevated
while calling the op.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 80+ messages in thread

end of thread, other threads:[~2022-05-02 16:22 UTC | newest]

Thread overview: 80+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-21 16:28 [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device Jason Gunthorpe
2022-04-21 16:28 ` Jason Gunthorpe
2022-04-21 16:28 ` [Intel-gfx] " Jason Gunthorpe
2022-04-21 16:28 ` [PATCH v2 1/7] vfio: Make vfio_(un)register_notifier accept a vfio_device Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  1:39   ` Tian, Kevin
2022-04-22  1:39     ` [Intel-gfx] " Tian, Kevin
2022-04-22 17:07     ` Jason Gunthorpe
2022-04-22 17:07       ` [Intel-gfx] " Jason Gunthorpe
2022-04-22 17:07       ` Jason Gunthorpe
2022-04-22  6:21   ` Christoph Hellwig
2022-04-22  6:21     ` [Intel-gfx] " Christoph Hellwig
2022-04-21 16:28 ` [PATCH v2 2/7] vfio/ccw: Remove mdev from struct channel_program Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-21 16:28 ` [PATCH v2 3/7] vfio/mdev: Pass in a struct vfio_device * to vfio_pin/unpin_pages() Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  1:43   ` Tian, Kevin
2022-04-22  1:43     ` [Intel-gfx] " Tian, Kevin
2022-04-22  1:43     ` Tian, Kevin
2022-04-22  6:22   ` Christoph Hellwig
2022-04-22  6:22     ` [Intel-gfx] " Christoph Hellwig
2022-04-22 20:56     ` Jason Gunthorpe
2022-04-22 20:56       ` Jason Gunthorpe
2022-04-22 20:56       ` [Intel-gfx] " Jason Gunthorpe
2022-04-23  5:15       ` Christoph Hellwig
2022-04-23  5:15         ` [Intel-gfx] " Christoph Hellwig
2022-04-23 12:49         ` Jason Gunthorpe
2022-04-23 12:49           ` [Intel-gfx] " Jason Gunthorpe
2022-04-23 12:49           ` Jason Gunthorpe
2022-04-21 16:28 ` [PATCH v2 4/7] vfio/mdev: Pass in a struct vfio_device * to vfio_dma_rw() Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  1:50   ` Tian, Kevin
2022-04-22  1:50     ` [Intel-gfx] " Tian, Kevin
2022-04-22  1:50     ` Tian, Kevin
2022-04-22 17:59     ` Jason Gunthorpe
2022-04-22 17:59       ` Jason Gunthorpe
2022-04-22 17:59       ` [Intel-gfx] " Jason Gunthorpe
2022-04-24  5:52       ` Tian, Kevin
2022-04-24  5:52         ` [Intel-gfx] " Tian, Kevin
2022-04-24  5:52         ` Tian, Kevin
2022-04-21 16:28 ` [PATCH v2 5/7] drm/i915/gvt: Change from vfio_group_(un)pin_pages to vfio_(un)pin_pages Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  1:51   ` Tian, Kevin
2022-04-22  1:51     ` Tian, Kevin
2022-04-22  1:51     ` Tian, Kevin
2022-04-22  6:22   ` Christoph Hellwig
2022-04-22  6:22     ` [Intel-gfx] " Christoph Hellwig
2022-04-21 16:28 ` [PATCH v2 6/7] vfio: Remove dead code Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  1:52   ` Tian, Kevin
2022-04-22  1:52     ` [Intel-gfx] " Tian, Kevin
2022-04-22  1:52     ` Tian, Kevin
2022-04-21 16:28 ` [PATCH v2 7/7] vfio: Remove calls to vfio_group_add_container_user() Jason Gunthorpe
2022-04-21 16:28   ` Jason Gunthorpe
2022-04-21 16:28   ` [Intel-gfx] " Jason Gunthorpe
2022-04-22  2:11   ` Tian, Kevin
2022-04-22  2:11     ` [Intel-gfx] " Tian, Kevin
2022-04-22  2:11     ` Tian, Kevin
2022-04-22 20:55     ` Jason Gunthorpe
2022-04-22 20:55       ` Jason Gunthorpe
2022-04-22 20:55       ` [Intel-gfx] " Jason Gunthorpe
2022-04-29 20:28   ` Alex Williamson
2022-04-29 20:28     ` Alex Williamson
2022-04-29 20:28     ` [Intel-gfx] " Alex Williamson
2022-05-02 16:22     ` Jason Gunthorpe
2022-05-02 16:22       ` [Intel-gfx] " Jason Gunthorpe
2022-05-02 16:22       ` Jason Gunthorpe
2022-04-21 21:25 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Make the rest of the VFIO driver interface use vfio_device (rev3) Patchwork
2022-04-29 17:31 ` [PATCH v2 0/7] Make the rest of the VFIO driver interface use vfio_device Jason Gunthorpe
2022-04-29 17:31   ` [Intel-gfx] " Jason Gunthorpe
2022-04-29 17:31   ` Jason Gunthorpe
2022-04-29 22:22   ` Alex Williamson
2022-04-29 22:22     ` [Intel-gfx] " Alex Williamson
2022-04-29 22:22     ` Alex Williamson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.