* [RFC 00/10]  Move group specific code into group.c
@ 2022-11-23 15:01 Yi Liu
  2022-11-23 15:01 ` [RFC 01/10] vfio: Simplify vfio_create_group() Yi Liu
                   ` (10 more replies)
  0 siblings, 11 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

With the introduction of iommufd[1], VFIO is moving toward providing a device
centric uAPI after adapting to iommufd. Following this trend, the existing VFIO
group infrastructure becomes optional once VFIO is converted to be device centric.

This series moves the group specific code out of vfio_main.c and prepares for
compiling the group infrastructure out after the vfio device cdev[2] is added.

The complete code is available in the branch below:

https://github.com/yiliu1765/iommufd/commits/vfio_group_split_rfcv1

This is based on Jason's "Connect VFIO to IOMMUFD"[3] and my "Make mdev driver
dma_unmap callback tolerant to unmaps come before device open"[4]

[1] https://lore.kernel.org/all/0-v5-4001c2997bd0+30c-iommufd_jgg@nvidia.com/
[2] https://github.com/yiliu1765/iommufd/tree/wip/vfio_device_cdev
[3] https://lore.kernel.org/kvm/063990c3-c244-1f7f-4e01-348023832066@intel.com/T/#t
[4] https://lore.kernel.org/kvm/20221123134832.429589-1-yi.l.liu@intel.com/T/#t

Regards,
	Yi Liu

Jason Gunthorpe (2):
  vfio: Simplify vfio_create_group()
  vfio: Move the sanity check of the group to vfio_create_group()

Yi Liu (8):
  vfio: Wrap group codes to be helpers for __vfio_register_dev() and
    unregister
  vfio: Make vfio_device_open() group agnostic
  vfio: Move device open/close code to be helpers
  vfio: Swap order of vfio_device_container_register() and open_device()
  vfio: Refactor vfio_device_first_open() and _last_close()
  vfio: Wrap vfio group module init/clean code into helpers
  vfio: Refactor dma APIs for emulated devices
  vfio: Move vfio group specific code into group.c

 drivers/vfio/Makefile    |   1 +
 drivers/vfio/container.c |  20 +-
 drivers/vfio/group.c     | 834 +++++++++++++++++++++++++++++++++++++
 drivers/vfio/vfio.h      |  52 ++-
 drivers/vfio/vfio_main.c | 863 +++------------------------------------
 5 files changed, 942 insertions(+), 828 deletions(-)
 create mode 100644 drivers/vfio/group.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC 01/10] vfio: Simplify vfio_create_group()
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 02/10] vfio: Move the sanity check of the group to vfio_create_group() Yi Liu
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

From: Jason Gunthorpe <jgg@nvidia.com>

The vfio.group_lock is now only used to serialize vfio_group creation
and destruction; we don't need the micro-optimization of searching,
unlocking, then allocating and searching again. Just hold the lock
the whole time.

Grabbed from:
https://lore.kernel.org/kvm/20220922152338.2a2238fe.alex.williamson@redhat.com/

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 33 ++++++++++-----------------------
 1 file changed, 10 insertions(+), 23 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index ea07b46dcea2..02f344441ddf 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -146,10 +146,12 @@ EXPORT_SYMBOL_GPL(vfio_device_set_open_count);
  * Group objects - create, release, get, put, search
  */
 static struct vfio_group *
-__vfio_group_get_from_iommu(struct iommu_group *iommu_group)
+vfio_group_get_from_iommu(struct iommu_group *iommu_group)
 {
 	struct vfio_group *group;
 
+	lockdep_assert_held(&vfio.group_lock);
+
 	/*
 	 * group->iommu_group from the vfio.group_list cannot be NULL
 	 * under the vfio.group_lock.
@@ -163,17 +165,6 @@ __vfio_group_get_from_iommu(struct iommu_group *iommu_group)
 	return NULL;
 }
 
-static struct vfio_group *
-vfio_group_get_from_iommu(struct iommu_group *iommu_group)
-{
-	struct vfio_group *group;
-
-	mutex_lock(&vfio.group_lock);
-	group = __vfio_group_get_from_iommu(iommu_group);
-	mutex_unlock(&vfio.group_lock);
-	return group;
-}
-
 static void vfio_group_release(struct device *dev)
 {
 	struct vfio_group *group = container_of(dev, struct vfio_group, dev);
@@ -228,6 +219,8 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
 	struct vfio_group *ret;
 	int err;
 
+	lockdep_assert_held(&vfio.group_lock);
+
 	group = vfio_group_alloc(iommu_group, type);
 	if (IS_ERR(group))
 		return group;
@@ -240,26 +233,16 @@ static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
 		goto err_put;
 	}
 
-	mutex_lock(&vfio.group_lock);
-
-	/* Did we race creating this group? */
-	ret = __vfio_group_get_from_iommu(iommu_group);
-	if (ret)
-		goto err_unlock;
-
 	err = cdev_device_add(&group->cdev, &group->dev);
 	if (err) {
 		ret = ERR_PTR(err);
-		goto err_unlock;
+		goto err_put;
 	}
 
 	list_add(&group->vfio_next, &vfio.group_list);
 
-	mutex_unlock(&vfio.group_lock);
 	return group;
 
-err_unlock:
-	mutex_unlock(&vfio.group_lock);
 err_put:
 	put_device(&group->dev);
 	return ret;
@@ -470,7 +453,9 @@ static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
 	if (ret)
 		goto out_put_group;
 
+	mutex_lock(&vfio.group_lock);
 	group = vfio_create_group(iommu_group, type);
+	mutex_unlock(&vfio.group_lock);
 	if (IS_ERR(group)) {
 		ret = PTR_ERR(group);
 		goto out_remove_device;
@@ -519,9 +504,11 @@ static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 		return ERR_PTR(-EINVAL);
 	}
 
+	mutex_lock(&vfio.group_lock);
 	group = vfio_group_get_from_iommu(iommu_group);
 	if (!group)
 		group = vfio_create_group(iommu_group, VFIO_IOMMU);
+	mutex_unlock(&vfio.group_lock);
 
 	/* The vfio_group holds a reference to the iommu_group */
 	iommu_group_put(iommu_group);
-- 
2.34.1



* [RFC 02/10] vfio: Move the sanity check of the group to vfio_create_group()
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
  2022-11-23 15:01 ` [RFC 01/10] vfio: Simplify vfio_create_group() Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 03/10] vfio: Wrap group codes to be helpers for __vfio_register_dev() and unregister Yi Liu
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

From: Jason Gunthorpe <jgg@nvidia.com>

This avoids open-coding group specific logic in __vfio_register_dev() for the
sanity check that an (existing) group is not corrupted by having two copies
of the same struct device in it. It also simplifies the error unwind for
this sanity check since the failure can be detected at group allocation time.

This also prepares for moving the group specific code into a separate group.c.

Grabbed from:
https://lore.kernel.org/kvm/20220922152338.2a2238fe.alex.williamson@redhat.com/

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 62 ++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 37 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 02f344441ddf..bd45b8907311 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -146,7 +146,7 @@ EXPORT_SYMBOL_GPL(vfio_device_set_open_count);
  * Group objects - create, release, get, put, search
  */
 static struct vfio_group *
-vfio_group_get_from_iommu(struct iommu_group *iommu_group)
+vfio_group_find_from_iommu(struct iommu_group *iommu_group)
 {
 	struct vfio_group *group;
 
@@ -157,10 +157,8 @@ vfio_group_get_from_iommu(struct iommu_group *iommu_group)
 	 * under the vfio.group_lock.
 	 */
 	list_for_each_entry(group, &vfio.group_list, vfio_next) {
-		if (group->iommu_group == iommu_group) {
-			refcount_inc(&group->drivers);
+		if (group->iommu_group == iommu_group)
 			return group;
-		}
 	}
 	return NULL;
 }
@@ -310,23 +308,6 @@ static bool vfio_device_try_get_registration(struct vfio_device *device)
 	return refcount_inc_not_zero(&device->refcount);
 }
 
-static struct vfio_device *vfio_group_get_device(struct vfio_group *group,
-						 struct device *dev)
-{
-	struct vfio_device *device;
-
-	mutex_lock(&group->device_lock);
-	list_for_each_entry(device, &group->device_list, group_next) {
-		if (device->dev == dev &&
-		    vfio_device_try_get_registration(device)) {
-			mutex_unlock(&group->device_lock);
-			return device;
-		}
-	}
-	mutex_unlock(&group->device_lock);
-	return NULL;
-}
-
 /*
  * VFIO driver API
  */
@@ -470,6 +451,21 @@ static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
 	return ERR_PTR(ret);
 }
 
+static bool vfio_group_has_device(struct vfio_group *group, struct device *dev)
+{
+	struct vfio_device *device;
+
+	mutex_lock(&group->device_lock);
+	list_for_each_entry(device, &group->device_list, group_next) {
+		if (device->dev == dev) {
+			mutex_unlock(&group->device_lock);
+			return true;
+		}
+	}
+	mutex_unlock(&group->device_lock);
+	return false;
+}
+
 static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 {
 	struct iommu_group *iommu_group;
@@ -505,9 +501,15 @@ static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 	}
 
 	mutex_lock(&vfio.group_lock);
-	group = vfio_group_get_from_iommu(iommu_group);
-	if (!group)
+	group = vfio_group_find_from_iommu(iommu_group);
+	if (group) {
+		if (WARN_ON(vfio_group_has_device(group, dev)))
+			group = ERR_PTR(-EINVAL);
+		else
+			refcount_inc(&group->drivers);
+	} else {
 		group = vfio_create_group(iommu_group, VFIO_IOMMU);
+	}
 	mutex_unlock(&vfio.group_lock);
 
 	/* The vfio_group holds a reference to the iommu_group */
@@ -518,7 +520,6 @@ static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 static int __vfio_register_dev(struct vfio_device *device,
 		struct vfio_group *group)
 {
-	struct vfio_device *existing_device;
 	int ret;
 
 	/*
@@ -540,19 +541,6 @@ static int __vfio_register_dev(struct vfio_device *device,
 	if (!device->dev_set)
 		vfio_assign_device_set(device, device);
 
-	existing_device = vfio_group_get_device(group, device->dev);
-	if (existing_device) {
-		/*
-		 * group->iommu_group is non-NULL because we hold the drivers
-		 * refcount.
-		 */
-		dev_WARN(device->dev, "Device already exists on group %d\n",
-			 iommu_group_id(group->iommu_group));
-		vfio_device_put_registration(existing_device);
-		ret = -EBUSY;
-		goto err_out;
-	}
-
 	/* Our reference on group is moved to the device */
 	device->group = group;
 
-- 
2.34.1



* [RFC 03/10] vfio: Wrap group codes to be helpers for __vfio_register_dev() and unregister
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
  2022-11-23 15:01 ` [RFC 01/10] vfio: Simplify vfio_create_group() Yi Liu
  2022-11-23 15:01 ` [RFC 02/10] vfio: Move the sanity check of the group to vfio_create_group() Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 04/10] vfio: Make vfio_device_open() group agnostic Yi Liu
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This avoids decoding group fields in the common functions used by
vfio_device registration, and prepares for further moving the vfio group
specific code into a separate file.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index bd45b8907311..f4af3afcb26f 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -517,6 +517,20 @@ static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 	return group;
 }
 
+static void vfio_device_group_register(struct vfio_device *device)
+{
+	mutex_lock(&device->group->device_lock);
+	list_add(&device->group_next, &device->group->device_list);
+	mutex_unlock(&device->group->device_lock);
+}
+
+static void vfio_device_group_unregister(struct vfio_device *device)
+{
+	mutex_lock(&device->group->device_lock);
+	list_del(&device->group_next);
+	mutex_unlock(&device->group->device_lock);
+}
+
 static int __vfio_register_dev(struct vfio_device *device,
 		struct vfio_group *group)
 {
@@ -555,9 +569,7 @@ static int __vfio_register_dev(struct vfio_device *device,
 	/* Refcounting can't start until the driver calls register */
 	refcount_set(&device->refcount, 1);
 
-	mutex_lock(&group->device_lock);
-	list_add(&device->group_next, &group->device_list);
-	mutex_unlock(&group->device_lock);
+	vfio_device_group_register(device);
 
 	return 0;
 err_out:
@@ -617,7 +629,6 @@ static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
  * removed.  Open file descriptors for the device... */
 void vfio_unregister_group_dev(struct vfio_device *device)
 {
-	struct vfio_group *group = device->group;
 	unsigned int i = 0;
 	bool interrupted = false;
 	long rc;
@@ -645,9 +656,7 @@ void vfio_unregister_group_dev(struct vfio_device *device)
 		}
 	}
 
-	mutex_lock(&group->device_lock);
-	list_del(&device->group_next);
-	mutex_unlock(&group->device_lock);
+	vfio_device_group_unregister(device);
 
 	/* Balances device_add in register path */
 	device_del(&device->device);
-- 
2.34.1



* [RFC 04/10] vfio: Make vfio_device_open() group agnostic
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (2 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 03/10] vfio: Wrap group codes to be helpers for __vfio_register_dev() and unregister Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 05/10] vfio: Move device open/close code to be helpers Yi Liu
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This prepares for moving the group specific code to a separate file.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index f4af3afcb26f..5e1e509e6477 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -878,9 +878,6 @@ static struct file *vfio_device_open(struct vfio_device *device)
 	 */
 	filep->f_mode |= (FMODE_PREAD | FMODE_PWRITE);
 
-	if (device->group->type == VFIO_NO_IOMMU)
-		dev_warn(device->dev, "vfio-noiommu device opened by user "
-			 "(%s:%d)\n", current->comm, task_pid_nr(current));
 	/*
 	 * On success the ref of device is moved to the file and
 	 * put in vfio_device_fops_release()
@@ -927,6 +924,10 @@ static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
 		goto err_put_fdno;
 	}
 
+	if (group->type == VFIO_NO_IOMMU)
+		dev_warn(device->dev, "vfio-noiommu device opened by user "
+			 "(%s:%d)\n", current->comm, task_pid_nr(current));
+
 	fd_install(fdno, filep);
 	return fdno;
 
-- 
2.34.1



* [RFC 05/10] vfio: Move device open/close code to be helpers
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (3 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 04/10] vfio: Make vfio_device_open() group agnostic Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 06/10] vfio: Swap order of vfio_device_container_register() and open_device() Yi Liu
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This pairs the device open and close code into matching helpers.

vfio_device_open() and vfio_device_close() handle the open_count and call
vfio_device_first_open() and vfio_device_last_close() when the open_count
condition is met. This also avoids open-coding the device handling in
vfio_group_ioctl_get_device_fd(), and prepares for further moving the vfio
group specific code into a separate file.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 46 +++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 5e1e509e6477..6ed5c2426464 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -846,20 +846,41 @@ static void vfio_device_last_close(struct vfio_device *device)
 	module_put(device->dev->driver->owner);
 }
 
-static struct file *vfio_device_open(struct vfio_device *device)
+static int vfio_device_open(struct vfio_device *device)
 {
-	struct file *filep;
-	int ret;
+	int ret = 0;
 
 	mutex_lock(&device->dev_set->lock);
 	device->open_count++;
 	if (device->open_count == 1) {
 		ret = vfio_device_first_open(device);
 		if (ret)
-			goto err_unlock;
+			device->open_count--;
 	}
 	mutex_unlock(&device->dev_set->lock);
 
+	return ret;
+}
+
+static void vfio_device_close(struct vfio_device *device)
+{
+	mutex_lock(&device->dev_set->lock);
+	vfio_assert_device_open(device);
+	if (device->open_count == 1)
+		vfio_device_last_close(device);
+	device->open_count--;
+	mutex_unlock(&device->dev_set->lock);
+}
+
+static struct file *vfio_device_open_file(struct vfio_device *device)
+{
+	struct file *filep;
+	int ret;
+
+	ret = vfio_device_open(device);
+	if (ret)
+		goto err_out;
+
 	/*
 	 * We can't use anon_inode_getfd() because we need to modify
 	 * the f_mode flags directly to allow more than just ioctls
@@ -885,12 +906,8 @@ static struct file *vfio_device_open(struct vfio_device *device)
 	return filep;
 
 err_close_device:
-	mutex_lock(&device->dev_set->lock);
-	if (device->open_count == 1)
-		vfio_device_last_close(device);
-err_unlock:
-	device->open_count--;
-	mutex_unlock(&device->dev_set->lock);
+	vfio_device_close(device);
+err_out:
 	return ERR_PTR(ret);
 }
 
@@ -918,7 +935,7 @@ static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
 		goto err_put_device;
 	}
 
-	filep = vfio_device_open(device);
+	filep = vfio_device_open_file(device);
 	if (IS_ERR(filep)) {
 		ret = PTR_ERR(filep);
 		goto err_put_fdno;
@@ -1105,12 +1122,7 @@ static int vfio_device_fops_release(struct inode *inode, struct file *filep)
 {
 	struct vfio_device *device = filep->private_data;
 
-	mutex_lock(&device->dev_set->lock);
-	vfio_assert_device_open(device);
-	if (device->open_count == 1)
-		vfio_device_last_close(device);
-	device->open_count--;
-	mutex_unlock(&device->dev_set->lock);
+	vfio_device_close(device);
 
 	vfio_device_put_registration(device);
 
-- 
2.34.1



* [RFC 06/10] vfio: Swap order of vfio_device_container_register() and open_device()
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (4 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 05/10] vfio: Move device open/close code to be helpers Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 07/10] vfio: Refactor vfio_device_first_open() and _last_close() Yi Liu
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This makes the DMA unmap callback registration to the container consistent
between the vfio iommufd compat mode and the legacy container mode.

In the vfio iommufd compat mode, this registration is done in
vfio_iommufd_bind() when creating an access that has an unmap callback,
prior to calling the open_device() op provided by the mdev driver. In the
vfio legacy mode, however, the registration is done by
vfio_device_container_register() after open_device(). Swapping the order of
vfio_device_container_register() and open_device() gives the two modes a
consistent order for the DMA unmap callback registration.

This also prepares for further splitting out the group specific code.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 6ed5c2426464..00c961897d20 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -799,6 +799,7 @@ static int vfio_device_first_open(struct vfio_device *device)
 		ret = vfio_group_use_container(device->group);
 		if (ret)
 			goto err_module_put;
+		vfio_device_container_register(device);
 	} else if (device->group->iommufd) {
 		ret = vfio_iommufd_bind(device, device->group->iommufd);
 		if (ret)
@@ -811,17 +812,17 @@ static int vfio_device_first_open(struct vfio_device *device)
 		if (ret)
 			goto err_container;
 	}
-	if (device->group->container)
-		vfio_device_container_register(device);
 	mutex_unlock(&device->group->group_lock);
 	return 0;
 
 err_container:
 	device->kvm = NULL;
-	if (device->group->container)
+	if (device->group->container) {
+		vfio_device_container_unregister(device);
 		vfio_group_unuse_container(device->group);
-	else if (device->group->iommufd)
+	} else if (device->group->iommufd) {
 		vfio_iommufd_unbind(device);
+	}
 err_module_put:
 	mutex_unlock(&device->group->group_lock);
 	module_put(device->dev->driver->owner);
@@ -833,15 +834,15 @@ static void vfio_device_last_close(struct vfio_device *device)
 	lockdep_assert_held(&device->dev_set->lock);
 
 	mutex_lock(&device->group->group_lock);
-	if (device->group->container)
-		vfio_device_container_unregister(device);
 	if (device->ops->close_device)
 		device->ops->close_device(device);
 	device->kvm = NULL;
-	if (device->group->container)
+	if (device->group->container) {
+		vfio_device_container_unregister(device);
 		vfio_group_unuse_container(device->group);
-	else if (device->group->iommufd)
+	} else if (device->group->iommufd) {
 		vfio_iommufd_unbind(device);
+	}
 	mutex_unlock(&device->group->group_lock);
 	module_put(device->dev->driver->owner);
 }
-- 
2.34.1



* [RFC 07/10] vfio: Refactor vfio_device_first_open() and _last_close()
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (5 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 06/10] vfio: Swap order of vfio_device_container_register() and open_device() Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 08/10] vfio: Wrap vfio group module init/clean code into helpers Yi Liu
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This prepares for moving the group specific code out of vfio_main.c.

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 91 +++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 28 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 00c961897d20..71108f3707c3 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -775,14 +775,9 @@ static bool vfio_assert_device_open(struct vfio_device *device)
 	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
 }
 
-static int vfio_device_first_open(struct vfio_device *device)
+static int vfio_device_group_use_iommu(struct vfio_device *device)
 {
-	int ret;
-
-	lockdep_assert_held(&device->dev_set->lock);
-
-	if (!try_module_get(device->dev->driver->owner))
-		return -ENODEV;
+	int ret = 0;
 
 	/*
 	 * Here we pass the KVM pointer with the group under the lock.  If the
@@ -792,39 +787,86 @@ static int vfio_device_first_open(struct vfio_device *device)
 	mutex_lock(&device->group->group_lock);
 	if (!vfio_group_has_iommu(device->group)) {
 		ret = -EINVAL;
-		goto err_module_put;
+		goto out_unlock;
 	}
 
 	if (device->group->container) {
 		ret = vfio_group_use_container(device->group);
 		if (ret)
-			goto err_module_put;
+			goto out_unlock;
 		vfio_device_container_register(device);
 	} else if (device->group->iommufd) {
 		ret = vfio_iommufd_bind(device, device->group->iommufd);
-		if (ret)
-			goto err_module_put;
 	}
 
-	device->kvm = device->group->kvm;
+out_unlock:
+	mutex_unlock(&device->group->group_lock);
+	return ret;
+}
+
+static void vfio_device_group_unuse_iommu(struct vfio_device *device)
+{
+	mutex_lock(&device->group->group_lock);
+	if (device->group->container) {
+		vfio_device_container_unregister(device);
+		vfio_group_unuse_container(device->group);
+	} else if (device->group->iommufd) {
+		vfio_iommufd_unbind(device);
+	}
+	mutex_unlock(&device->group->group_lock);
+}
+
+static struct kvm *vfio_group_get_kvm(struct vfio_group *group)
+{
+	mutex_lock(&group->group_lock);
+	if (!group->kvm) {
+		mutex_unlock(&group->group_lock);
+		return NULL;
+	}
+	/* group_lock is released in the vfio_group_put_kvm() */
+	return group->kvm;
+}
+
+static void vfio_group_put_kvm(struct vfio_group *group)
+{
+	mutex_unlock(&group->group_lock);
+}
+
+static int vfio_device_first_open(struct vfio_device *device)
+{
+	struct kvm *kvm;
+	int ret;
+
+	lockdep_assert_held(&device->dev_set->lock);
+
+	if (!try_module_get(device->dev->driver->owner))
+		return -ENODEV;
+
+	ret = vfio_device_group_use_iommu(device);
+	if (ret)
+		goto err_module_put;
+
+	kvm = vfio_group_get_kvm(device->group);
+	if (!kvm) {
+		ret = -EINVAL;
+		goto err_unuse_iommu;
+	}
+
+	device->kvm = kvm;
 	if (device->ops->open_device) {
 		ret = device->ops->open_device(device);
 		if (ret)
 			goto err_container;
 	}
-	mutex_unlock(&device->group->group_lock);
+	vfio_group_put_kvm(device->group);
 	return 0;
 
 err_container:
 	device->kvm = NULL;
-	if (device->group->container) {
-		vfio_device_container_unregister(device);
-		vfio_group_unuse_container(device->group);
-	} else if (device->group->iommufd) {
-		vfio_iommufd_unbind(device);
-	}
+	vfio_group_put_kvm(device->group);
+err_unuse_iommu:
+	vfio_device_group_unuse_iommu(device);
 err_module_put:
-	mutex_unlock(&device->group->group_lock);
 	module_put(device->dev->driver->owner);
 	return ret;
 }
@@ -833,17 +875,10 @@ static void vfio_device_last_close(struct vfio_device *device)
 {
 	lockdep_assert_held(&device->dev_set->lock);
 
-	mutex_lock(&device->group->group_lock);
 	if (device->ops->close_device)
 		device->ops->close_device(device);
 	device->kvm = NULL;
-	if (device->group->container) {
-		vfio_device_container_unregister(device);
-		vfio_group_unuse_container(device->group);
-	} else if (device->group->iommufd) {
-		vfio_iommufd_unbind(device);
-	}
-	mutex_unlock(&device->group->group_lock);
+	vfio_device_group_unuse_iommu(device);
 	module_put(device->dev->driver->owner);
 }
 
-- 
2.34.1



* [RFC 08/10] vfio: Wrap vfio group module init/clean code into helpers
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (6 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 07/10] vfio: Refactor vfio_device_first_open() and _last_close() Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 15:01 ` [RFC 09/10] vfio: Refactor dma APIs for emulated devices Yi Liu
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This wraps the init/cleanup code for the vfio group global variables into
helpers, and prepares for further moving the vfio group specific code into a
separate file.

Since the container is used by the group, vfio_container_init/cleanup() is
moved into vfio_group_init/cleanup().

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/vfio_main.c | 56 ++++++++++++++++++++++++++--------------
 1 file changed, 36 insertions(+), 20 deletions(-)

diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 71108f3707c3..cde258f4ea17 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -2053,12 +2053,11 @@ static char *vfio_devnode(struct device *dev, umode_t *mode)
 	return kasprintf(GFP_KERNEL, "vfio/%s", dev_name(dev));
 }
 
-static int __init vfio_init(void)
+static int __init vfio_group_init(void)
 {
 	int ret;
 
 	ida_init(&vfio.group_ida);
-	ida_init(&vfio.device_ida);
 	mutex_init(&vfio.group_lock);
 	INIT_LIST_HEAD(&vfio.group_list);
 
@@ -2075,24 +2074,12 @@ static int __init vfio_init(void)
 
 	vfio.class->devnode = vfio_devnode;
 
-	/* /sys/class/vfio-dev/vfioX */
-	vfio.device_class = class_create(THIS_MODULE, "vfio-dev");
-	if (IS_ERR(vfio.device_class)) {
-		ret = PTR_ERR(vfio.device_class);
-		goto err_dev_class;
-	}
-
 	ret = alloc_chrdev_region(&vfio.group_devt, 0, MINORMASK + 1, "vfio");
 	if (ret)
 		goto err_alloc_chrdev;
-
-	pr_info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
 	return 0;
 
 err_alloc_chrdev:
-	class_destroy(vfio.device_class);
-	vfio.device_class = NULL;
-err_dev_class:
 	class_destroy(vfio.class);
 	vfio.class = NULL;
 err_group_class:
@@ -2100,18 +2087,47 @@ static int __init vfio_init(void)
 	return ret;
 }
 
-static void __exit vfio_cleanup(void)
+static void vfio_group_cleanup(void)
 {
 	WARN_ON(!list_empty(&vfio.group_list));
-
-	ida_destroy(&vfio.device_ida);
 	ida_destroy(&vfio.group_ida);
 	unregister_chrdev_region(vfio.group_devt, MINORMASK + 1);
-	class_destroy(vfio.device_class);
-	vfio.device_class = NULL;
 	class_destroy(vfio.class);
-	vfio_container_cleanup();
 	vfio.class = NULL;
+	vfio_container_cleanup();
+}
+
+static int __init vfio_init(void)
+{
+	int ret;
+
+	ida_init(&vfio.device_ida);
+
+	ret = vfio_group_init();
+	if (ret)
+		return ret;
+
+	/* /sys/class/vfio-dev/vfioX */
+	vfio.device_class = class_create(THIS_MODULE, "vfio-dev");
+	if (IS_ERR(vfio.device_class)) {
+		ret = PTR_ERR(vfio.device_class);
+		goto err_dev_class;
+	}
+
+	pr_info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
+	return 0;
+
+err_dev_class:
+	vfio_group_cleanup();
+	return ret;
+}
+
+static void __exit vfio_cleanup(void)
+{
+	ida_destroy(&vfio.device_ida);
+	class_destroy(vfio.device_class);
+	vfio.device_class = NULL;
+	vfio_group_cleanup();
 	xa_destroy(&vfio_device_set_xa);
 }
 
-- 
2.34.1



* [RFC 09/10] vfio: Refactor dma APIs for emulated devices
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (7 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 08/10] vfio: Wrap vfio group module init/clean code into helpers Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 16:51   ` Jason Gunthorpe
  2022-11-23 15:01 ` [RFC 10/10] vfio: Move vfio group specific code into group.c Yi Liu
  2022-11-23 18:41 ` [RFC 00/10] Move " Jason Gunthorpe
  10 siblings, 1 reply; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

Use group helpers instead of open-coding group related logic in these
APIs. This prepares for moving the group specific code out of vfio_main.c.
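The shape of the change, visible in the diffstat below, is that callers now pass the vfio_group and the helper derives the container (and iommu_group) internally, so vfio_main.c never dereferences group internals. A minimal sketch with simplified structs (not the kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct demo_container { int pinned; };
struct demo_group {
	struct demo_container *container; /* NULL when backed by iommufd */
};

/* Before this patch, callers open-coded group->container and passed it
 * down.  After it, the group-scoped helper derives everything from the
 * group, keeping the container layout private to this file. */
static int demo_group_container_pin_pages(struct demo_group *group, int npage)
{
	struct demo_container *container = group->container;

	if (!container)
		return -1;	/* stands in for -EOPNOTSUPP */
	container->pinned += npage;
	return npage;	/* number of pages pinned, like the real API */
}
```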

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 drivers/vfio/container.c | 20 +++++++++++++-------
 drivers/vfio/vfio.h      | 32 ++++++++++++++++----------------
 drivers/vfio/vfio_main.c | 26 +++++++++++++++-----------
 3 files changed, 44 insertions(+), 34 deletions(-)

diff --git a/drivers/vfio/container.c b/drivers/vfio/container.c
index 6b362d97d682..e0d11ab7229a 100644
--- a/drivers/vfio/container.c
+++ b/drivers/vfio/container.c
@@ -540,11 +540,13 @@ void vfio_group_unuse_container(struct vfio_group *group)
 	fput(group->opened_file);
 }
 
-int vfio_container_pin_pages(struct vfio_container *container,
-			     struct iommu_group *iommu_group, dma_addr_t iova,
-			     int npage, int prot, struct page **pages)
+int vfio_group_container_pin_pages(struct vfio_group *group,
+				   dma_addr_t iova, int npage,
+				   int prot, struct page **pages)
 {
+	struct vfio_container *container = group->container;
 	struct vfio_iommu_driver *driver = container->iommu_driver;
+	struct iommu_group *iommu_group = group->iommu_group;
 
 	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
 		return -E2BIG;
@@ -555,9 +557,11 @@ int vfio_container_pin_pages(struct vfio_container *container,
 				      npage, prot, pages);
 }
 
-void vfio_container_unpin_pages(struct vfio_container *container,
-				dma_addr_t iova, int npage)
+void vfio_group_container_unpin_pages(struct vfio_group *group,
+				      dma_addr_t iova, int npage)
 {
+	struct vfio_container *container = group->container;
+
 	if (WARN_ON(npage <= 0 || npage > VFIO_PIN_PAGES_MAX_ENTRIES))
 		return;
 
@@ -565,9 +569,11 @@ void vfio_container_unpin_pages(struct vfio_container *container,
 						  npage);
 }
 
-int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
-			  void *data, size_t len, bool write)
+int vfio_group_container_dma_rw(struct vfio_group *group,
+				dma_addr_t iova, void *data,
+				size_t len, bool write)
 {
+	struct vfio_container *container = group->container;
 	struct vfio_iommu_driver *driver = container->iommu_driver;
 
 	if (unlikely(!driver || !driver->ops->dma_rw))
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index 3378714a7462..d6b6bc20406b 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -122,13 +122,14 @@ int vfio_container_attach_group(struct vfio_container *container,
 void vfio_group_detach_container(struct vfio_group *group);
 void vfio_device_container_register(struct vfio_device *device);
 void vfio_device_container_unregister(struct vfio_device *device);
-int vfio_container_pin_pages(struct vfio_container *container,
-			     struct iommu_group *iommu_group, dma_addr_t iova,
-			     int npage, int prot, struct page **pages);
-void vfio_container_unpin_pages(struct vfio_container *container,
-				dma_addr_t iova, int npage);
-int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
-			  void *data, size_t len, bool write);
+int vfio_group_container_pin_pages(struct vfio_group *group,
+				   dma_addr_t iova, int npage,
+				   int prot, struct page **pages);
+void vfio_group_container_unpin_pages(struct vfio_group *group,
+				      dma_addr_t iova, int npage);
+int vfio_group_container_dma_rw(struct vfio_group *group,
+				dma_addr_t iova, void *data,
+				size_t len, bool write);
 
 int __init vfio_container_init(void);
 void vfio_container_cleanup(void);
@@ -166,22 +167,21 @@ static inline void vfio_device_container_unregister(struct vfio_device *device)
 {
 }
 
-static inline int vfio_container_pin_pages(struct vfio_container *container,
-					   struct iommu_group *iommu_group,
-					   dma_addr_t iova, int npage, int prot,
-					   struct page **pages)
+static inline int vfio_group_container_pin_pages(struct vfio_group *group,
+						 dma_addr_t iova, int npage,
+						 int prot, struct page **pages)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline void vfio_container_unpin_pages(struct vfio_container *container,
-					      dma_addr_t iova, int npage)
+static inline void vfio_group_container_unpin_pages(struct vfio_group *group,
+						    dma_addr_t iova, int npage)
 {
 }
 
-static inline int vfio_container_dma_rw(struct vfio_container *container,
-					dma_addr_t iova, void *data, size_t len,
-					bool write)
+static inline int vfio_group_container_dma_rw(struct vfio_group *group,
+					      dma_addr_t iova, void *data,
+					      size_t len, bool write)
 {
 	return -EOPNOTSUPP;
 }
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index cde258f4ea17..b6d3cb35a523 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1925,6 +1925,11 @@ int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr, int num_irqs,
 }
 EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
 
+static bool vfio_group_has_container(struct vfio_group *group)
+{
+	return group->container;
+}
+
 /*
  * Pin contiguous user pages and return their associated host pages for local
  * domain only.
@@ -1937,7 +1942,7 @@ EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
  * Return error or number of pages pinned.
  *
  * A driver may only call this function if the vfio_device was created
- * by vfio_register_emulated_iommu_dev() due to vfio_container_pin_pages().
+ * by vfio_register_emulated_iommu_dev() due to vfio_group_container_pin_pages().
  */
 int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
 		   int npage, int prot, struct page **pages)
@@ -1945,10 +1950,9 @@ int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
 	/* group->container cannot change while a vfio device is open */
 	if (!pages || !npage || WARN_ON(!vfio_assert_device_open(device)))
 		return -EINVAL;
-	if (device->group->container)
-		return vfio_container_pin_pages(device->group->container,
-						device->group->iommu_group,
-						iova, npage, prot, pages);
+	if (vfio_group_has_container(device->group))
+		return vfio_group_container_pin_pages(device->group, iova,
+						      npage, prot, pages);
 	if (device->iommufd_access) {
 		int ret;
 
@@ -1984,9 +1988,9 @@ void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova, int npage)
 	if (WARN_ON(!vfio_assert_device_open(device)))
 		return;
 
-	if (device->group->container) {
-		vfio_container_unpin_pages(device->group->container, iova,
-					   npage);
+	if (vfio_group_has_container(device->group)) {
+		vfio_group_container_unpin_pages(device->group, iova,
+						 npage);
 		return;
 	}
 	if (device->iommufd_access) {
@@ -2023,9 +2027,9 @@ int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova, void *data,
 	if (!data || len <= 0 || !vfio_assert_device_open(device))
 		return -EINVAL;
 
-	if (device->group->container)
-		return vfio_container_dma_rw(device->group->container, iova,
-					     data, len, write);
+	if (vfio_group_has_container(device->group))
+		return vfio_group_container_dma_rw(device->group, iova,
+						   data, len, write);
 
 	if (device->iommufd_access) {
 		unsigned int flags = 0;
-- 
2.34.1



* [RFC 10/10] vfio: Move vfio group specific code into group.c
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (8 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 09/10] vfio: Refactor dma APIs for emulated devices Yi Liu
@ 2022-11-23 15:01 ` Yi Liu
  2022-11-23 18:37   ` Jason Gunthorpe
  2022-11-23 18:41 ` [RFC 00/10] Move " Jason Gunthorpe
  10 siblings, 1 reply; 17+ messages in thread
From: Yi Liu @ 2022-11-23 15:01 UTC (permalink / raw)
  To: alex.williamson, jgg
  Cc: kevin.tian, eric.auger, cohuck, nicolinc, yi.y.sun, chao.p.peng,
	mjrosato, kvm, yi.l.liu

This prepares for compiling out the vfio group code after the vfio device
cdev is added. After this, no vfio_group specific code remains in
vfio_main.c.
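Keeping vfio_group knowledge out of vfio_main.c follows the usual C encapsulation pattern: the struct definition stays behind an internal header (vfio.h here) and other translation units call only narrow accessors such as vfio_group_has_container(). A single-file sketch of that split, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* --- would live in group.c, type shared via an internal header ---- */
struct demo_group {
	void *container;	/* detail other files must not touch */
};

/* Narrow accessor exported through the internal header; code playing
 * the role of vfio_main.c calls this instead of group->container. */
static bool demo_group_has_container(struct demo_group *group)
{
	return group->container != NULL;
}

/* --- would live in vfio_main.c ------------------------------------ */
static int demo_pin_pages(struct demo_group *group)
{
	/* Pick a backend without knowing the group's internals. */
	if (demo_group_has_container(group))
		return 1;	/* container-backed path */
	return 0;		/* iommufd-backed path */
}
```

With the accessor in place, group.c can later be compiled out and the accessor stubbed, without touching any caller.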

Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
 drivers/vfio/Makefile    |   1 +
 drivers/vfio/group.c     | 834 +++++++++++++++++++++++++++++++++++++++
 drivers/vfio/vfio.h      |  20 +
 drivers/vfio/vfio_main.c | 806 +------------------------------------
 4 files changed, 858 insertions(+), 803 deletions(-)
 create mode 100644 drivers/vfio/group.c

diff --git a/drivers/vfio/Makefile b/drivers/vfio/Makefile
index b953517dc70f..3783db7e8082 100644
--- a/drivers/vfio/Makefile
+++ b/drivers/vfio/Makefile
@@ -4,6 +4,7 @@ vfio_virqfd-y := virqfd.o
 obj-$(CONFIG_VFIO) += vfio.o
 
 vfio-y += vfio_main.o \
+	  group.o \
 	  iova_bitmap.o
 vfio-$(CONFIG_IOMMUFD) += iommufd.o
 vfio-$(CONFIG_VFIO_CONTAINER) += container.o
diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
new file mode 100644
index 000000000000..d8ef098c1f74
--- /dev/null
+++ b/drivers/vfio/group.c
@@ -0,0 +1,834 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VFIO core
+ *
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ *     Author: Alex Williamson <alex.williamson@redhat.com>
+ *
+ * Derived from original vfio:
+ * Copyright 2010 Cisco Systems, Inc.  All rights reserved.
+ * Author: Tom Lyon, pugs@cisco.com
+ */
+
+#include <linux/file.h>
+#include <linux/vfio.h>
+#include <linux/iommufd.h>
+#include "vfio.h"
+
+static struct vfio {
+	struct class			*class;
+	struct list_head		group_list;
+	struct mutex			group_lock; /* locks group_list */
+	struct ida			group_ida;
+	dev_t				group_devt;
+} vfio;
+
+/*
+ * VFIO Group fd, /dev/vfio/$GROUP
+ */
+static bool vfio_group_has_iommu(struct vfio_group *group)
+{
+	lockdep_assert_held(&group->group_lock);
+	/*
+	 * There can only be users if there is a container, and if there is a
+	 * container there must be users.
+	 */
+	WARN_ON(!group->container != !group->container_users);
+
+	return group->container || group->iommufd;
+}
+
+/*
+ * VFIO_GROUP_UNSET_CONTAINER should fail if there are other users or
+ * if there was no container to unset.  Since the ioctl is called on
+ * the group, we know that still exists, therefore the only valid
+ * transition here is 1->0.
+ */
+static int vfio_group_ioctl_unset_container(struct vfio_group *group)
+{
+	int ret = 0;
+
+	mutex_lock(&group->group_lock);
+	if (!vfio_group_has_iommu(group)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+	if (group->container) {
+		if (group->container_users != 1) {
+			ret = -EBUSY;
+			goto out_unlock;
+		}
+		vfio_group_detach_container(group);
+	}
+	if (group->iommufd) {
+		iommufd_ctx_put(group->iommufd);
+		group->iommufd = NULL;
+	}
+
+out_unlock:
+	mutex_unlock(&group->group_lock);
+	return ret;
+}
+
+static int vfio_group_ioctl_set_container(struct vfio_group *group,
+					  int __user *arg)
+{
+	struct vfio_container *container;
+	struct iommufd_ctx *iommufd;
+	struct fd f;
+	int ret;
+	int fd;
+
+	if (get_user(fd, arg))
+		return -EFAULT;
+
+	f = fdget(fd);
+	if (!f.file)
+		return -EBADF;
+
+	mutex_lock(&group->group_lock);
+	if (vfio_group_has_iommu(group)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+	if (!group->iommu_group) {
+		ret = -ENODEV;
+		goto out_unlock;
+	}
+
+	container = vfio_container_from_file(f.file);
+	if (container) {
+		ret = vfio_container_attach_group(container, group);
+		goto out_unlock;
+	}
+
+	iommufd = iommufd_ctx_from_file(f.file);
+	if (!IS_ERR(iommufd)) {
+		u32 ioas_id;
+
+		ret = iommufd_vfio_compat_ioas_id(iommufd, &ioas_id);
+		if (ret) {
+			iommufd_ctx_put(iommufd);
+			goto out_unlock;
+		}
+
+		group->iommufd = iommufd;
+		goto out_unlock;
+	}
+
+	/* The FD passed is not recognized. */
+	ret = -EBADFD;
+
+out_unlock:
+	mutex_unlock(&group->group_lock);
+	fdput(f);
+	return ret;
+}
+
+static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
+						     char *buf)
+{
+	struct vfio_device *it, *device = ERR_PTR(-ENODEV);
+
+	mutex_lock(&group->device_lock);
+	list_for_each_entry(it, &group->device_list, group_next) {
+		int ret;
+
+		if (it->ops->match) {
+			ret = it->ops->match(it, buf);
+			if (ret < 0) {
+				device = ERR_PTR(ret);
+				break;
+			}
+		} else {
+			ret = !strcmp(dev_name(it->dev), buf);
+		}
+
+		if (ret && vfio_device_try_get_registration(it)) {
+			device = it;
+			break;
+		}
+	}
+	mutex_unlock(&group->device_lock);
+
+	return device;
+}
+
+static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
+					  char __user *arg)
+{
+	struct vfio_device *device;
+	struct file *filep;
+	char *buf;
+	int fdno;
+	int ret;
+
+	buf = strndup_user(arg, PAGE_SIZE);
+	if (IS_ERR(buf))
+		return PTR_ERR(buf);
+
+	device = vfio_device_get_from_name(group, buf);
+	kfree(buf);
+	if (IS_ERR(device))
+		return PTR_ERR(device);
+
+	fdno = get_unused_fd_flags(O_CLOEXEC);
+	if (fdno < 0) {
+		ret = fdno;
+		goto err_put_device;
+	}
+
+	filep = vfio_device_open_file(device);
+	if (IS_ERR(filep)) {
+		ret = PTR_ERR(filep);
+		goto err_put_fdno;
+	}
+
+	if (group->type == VFIO_NO_IOMMU)
+		dev_warn(device->dev, "vfio-noiommu device opened by user "
+			 "(%s:%d)\n", current->comm, task_pid_nr(current));
+
+	fd_install(fdno, filep);
+	return fdno;
+
+err_put_fdno:
+	put_unused_fd(fdno);
+err_put_device:
+	vfio_device_put_registration(device);
+	return ret;
+}
+
+static int vfio_group_ioctl_get_status(struct vfio_group *group,
+				       struct vfio_group_status __user *arg)
+{
+	unsigned long minsz = offsetofend(struct vfio_group_status, flags);
+	struct vfio_group_status status;
+
+	if (copy_from_user(&status, arg, minsz))
+		return -EFAULT;
+
+	if (status.argsz < minsz)
+		return -EINVAL;
+
+	status.flags = 0;
+
+	mutex_lock(&group->group_lock);
+	if (!group->iommu_group) {
+		mutex_unlock(&group->group_lock);
+		return -ENODEV;
+	}
+
+	/*
+	 * With the container FD the iommu_group_claim_dma_owner() is done
+	 * during SET_CONTAINER but for IOMMFD this is done during
+	 * VFIO_GROUP_GET_DEVICE_FD. Meaning that with iommufd
+	 * VFIO_GROUP_FLAGS_VIABLE could be set but GET_DEVICE_FD will fail due
+	 * to viability.
+	 */
+	if (vfio_group_has_iommu(group))
+		status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
+				VFIO_GROUP_FLAGS_VIABLE;
+	else if (!iommu_group_dma_owner_claimed(group->iommu_group))
+		status.flags |= VFIO_GROUP_FLAGS_VIABLE;
+	mutex_unlock(&group->group_lock);
+
+	if (copy_to_user(arg, &status, minsz))
+		return -EFAULT;
+	return 0;
+}
+
+static long vfio_group_fops_unl_ioctl(struct file *filep,
+				      unsigned int cmd, unsigned long arg)
+{
+	struct vfio_group *group = filep->private_data;
+	void __user *uarg = (void __user *)arg;
+
+	switch (cmd) {
+	case VFIO_GROUP_GET_DEVICE_FD:
+		return vfio_group_ioctl_get_device_fd(group, uarg);
+	case VFIO_GROUP_GET_STATUS:
+		return vfio_group_ioctl_get_status(group, uarg);
+	case VFIO_GROUP_SET_CONTAINER:
+		return vfio_group_ioctl_set_container(group, uarg);
+	case VFIO_GROUP_UNSET_CONTAINER:
+		return vfio_group_ioctl_unset_container(group);
+	default:
+		return -ENOTTY;
+	}
+}
+
+static int vfio_group_fops_open(struct inode *inode, struct file *filep)
+{
+	struct vfio_group *group =
+		container_of(inode->i_cdev, struct vfio_group, cdev);
+	int ret;
+
+	mutex_lock(&group->group_lock);
+
+	/*
+	 * drivers can be zero if this races with vfio_device_remove_group(), it
+	 * will be stable at 0 under the group rwsem
+	 */
+	if (refcount_read(&group->drivers) == 0) {
+		ret = -ENODEV;
+		goto out_unlock;
+	}
+
+	if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO)) {
+		ret = -EPERM;
+		goto out_unlock;
+	}
+
+	/*
+	 * Do we need multiple instances of the group open?  Seems not.
+	 */
+	if (group->opened_file) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+	group->opened_file = filep;
+	filep->private_data = group;
+	ret = 0;
+out_unlock:
+	mutex_unlock(&group->group_lock);
+	return ret;
+}
+
+static int vfio_group_fops_release(struct inode *inode, struct file *filep)
+{
+	struct vfio_group *group = filep->private_data;
+
+	filep->private_data = NULL;
+
+	mutex_lock(&group->group_lock);
+	/*
+	 * Device FDs hold a group file reference, therefore the group release
+	 * is only called when there are no open devices.
+	 */
+	WARN_ON(group->notifier.head);
+	if (group->container)
+		vfio_group_detach_container(group);
+	if (group->iommufd) {
+		iommufd_ctx_put(group->iommufd);
+		group->iommufd = NULL;
+	}
+	group->opened_file = NULL;
+	mutex_unlock(&group->group_lock);
+	return 0;
+}
+
+static const struct file_operations vfio_group_fops = {
+	.owner		= THIS_MODULE,
+	.unlocked_ioctl	= vfio_group_fops_unl_ioctl,
+	.compat_ioctl	= compat_ptr_ioctl,
+	.open		= vfio_group_fops_open,
+	.release	= vfio_group_fops_release,
+};
+
+/*
+ * Group objects - create, release, get, put, search
+ */
+static struct vfio_group *
+vfio_group_find_from_iommu(struct iommu_group *iommu_group)
+{
+	struct vfio_group *group;
+
+	lockdep_assert_held(&vfio.group_lock);
+
+	/*
+	 * group->iommu_group from the vfio.group_list cannot be NULL
+	 * under the vfio.group_lock.
+	 */
+	list_for_each_entry(group, &vfio.group_list, vfio_next) {
+		if (group->iommu_group == iommu_group)
+			return group;
+	}
+	return NULL;
+}
+
+static void vfio_group_release(struct device *dev)
+{
+	struct vfio_group *group = container_of(dev, struct vfio_group, dev);
+
+	mutex_destroy(&group->device_lock);
+	mutex_destroy(&group->group_lock);
+	WARN_ON(group->iommu_group);
+	ida_free(&vfio.group_ida, MINOR(group->dev.devt));
+	kfree(group);
+}
+
+static struct vfio_group *vfio_group_alloc(struct iommu_group *iommu_group,
+					   enum vfio_group_type type)
+{
+	struct vfio_group *group;
+	int minor;
+
+	group = kzalloc(sizeof(*group), GFP_KERNEL);
+	if (!group)
+		return ERR_PTR(-ENOMEM);
+
+	minor = ida_alloc_max(&vfio.group_ida, MINORMASK, GFP_KERNEL);
+	if (minor < 0) {
+		kfree(group);
+		return ERR_PTR(minor);
+	}
+
+	device_initialize(&group->dev);
+	group->dev.devt = MKDEV(MAJOR(vfio.group_devt), minor);
+	group->dev.class = vfio.class;
+	group->dev.release = vfio_group_release;
+	cdev_init(&group->cdev, &vfio_group_fops);
+	group->cdev.owner = THIS_MODULE;
+
+	refcount_set(&group->drivers, 1);
+	mutex_init(&group->group_lock);
+	INIT_LIST_HEAD(&group->device_list);
+	mutex_init(&group->device_lock);
+	group->iommu_group = iommu_group;
+	/* put in vfio_group_release() */
+	iommu_group_ref_get(iommu_group);
+	group->type = type;
+	BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
+
+	return group;
+}
+
+static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
+		enum vfio_group_type type)
+{
+	struct vfio_group *group;
+	struct vfio_group *ret;
+	int err;
+
+	lockdep_assert_held(&vfio.group_lock);
+
+	group = vfio_group_alloc(iommu_group, type);
+	if (IS_ERR(group))
+		return group;
+
+	err = dev_set_name(&group->dev, "%s%d",
+			   group->type == VFIO_NO_IOMMU ? "noiommu-" : "",
+			   iommu_group_id(iommu_group));
+	if (err) {
+		ret = ERR_PTR(err);
+		goto err_put;
+	}
+
+	err = cdev_device_add(&group->cdev, &group->dev);
+	if (err) {
+		ret = ERR_PTR(err);
+		goto err_put;
+	}
+
+	list_add(&group->vfio_next, &vfio.group_list);
+
+	return group;
+
+err_put:
+	put_device(&group->dev);
+	return ret;
+}
+
+void vfio_device_remove_group(struct vfio_device *device)
+{
+	struct vfio_group *group = device->group;
+	struct iommu_group *iommu_group;
+
+	if (group->type == VFIO_NO_IOMMU || group->type == VFIO_EMULATED_IOMMU)
+		iommu_group_remove_device(device->dev);
+
+	/* Pairs with vfio_create_group() / vfio_group_get_from_iommu() */
+	if (!refcount_dec_and_mutex_lock(&group->drivers, &vfio.group_lock))
+		return;
+	list_del(&group->vfio_next);
+
+	/*
+	 * We could concurrently probe another driver in the group that might
+	 * race vfio_device_remove_group() with vfio_get_group(), so we have to
+	 * ensure that the sysfs is all cleaned up under lock otherwise the
+	 * cdev_device_add() will fail due to the name already existing.
+	 */
+	cdev_device_del(&group->cdev, &group->dev);
+
+	mutex_lock(&group->group_lock);
+	/*
+	 * These data structures all have paired operations that can only be
+	 * undone when the caller holds a live reference on the device. Since
+	 * all pairs must be undone these WARN_ON's indicate some caller did not
+	 * properly hold the group reference.
+	 */
+	WARN_ON(!list_empty(&group->device_list));
+	WARN_ON(group->notifier.head);
+
+	/*
+	 * Revoke all users of group->iommu_group. At this point we know there
+	 * are no devices active because we are unplugging the last one. Setting
+	 * iommu_group to NULL blocks all new users.
+	 */
+	if (group->container)
+		vfio_group_detach_container(group);
+	iommu_group = group->iommu_group;
+	group->iommu_group = NULL;
+	mutex_unlock(&group->group_lock);
+	mutex_unlock(&vfio.group_lock);
+
+	iommu_group_put(iommu_group);
+	put_device(&group->dev);
+}
+
+struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
+					    enum vfio_group_type type)
+{
+	struct iommu_group *iommu_group;
+	struct vfio_group *group;
+	int ret;
+
+	iommu_group = iommu_group_alloc();
+	if (IS_ERR(iommu_group))
+		return ERR_CAST(iommu_group);
+
+	ret = iommu_group_set_name(iommu_group, "vfio-noiommu");
+	if (ret)
+		goto out_put_group;
+	ret = iommu_group_add_device(iommu_group, dev);
+	if (ret)
+		goto out_put_group;
+
+	mutex_lock(&vfio.group_lock);
+	group = vfio_create_group(iommu_group, type);
+	mutex_unlock(&vfio.group_lock);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		goto out_remove_device;
+	}
+	iommu_group_put(iommu_group);
+	return group;
+
+out_remove_device:
+	iommu_group_remove_device(dev);
+out_put_group:
+	iommu_group_put(iommu_group);
+	return ERR_PTR(ret);
+}
+
+static bool vfio_group_has_device(struct vfio_group *group, struct device *dev)
+{
+	struct vfio_device *device;
+
+	mutex_lock(&group->device_lock);
+	list_for_each_entry(device, &group->device_list, group_next) {
+		if (device->dev == dev) {
+			mutex_unlock(&group->device_lock);
+			return true;
+		}
+	}
+	mutex_unlock(&group->device_lock);
+	return false;
+}
+
+struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
+{
+	struct iommu_group *iommu_group;
+	struct vfio_group *group;
+
+	iommu_group = iommu_group_get(dev);
+	if (!iommu_group && vfio_noiommu) {
+		/*
+		 * With noiommu enabled, create an IOMMU group for devices that
+		 * don't already have one, implying no IOMMU hardware/driver
+		 * exists.  Taint the kernel because we're about to give a DMA
+		 * capable device to a user without IOMMU protection.
+		 */
+		group = vfio_noiommu_group_alloc(dev, VFIO_NO_IOMMU);
+		if (!IS_ERR(group)) {
+			add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+			dev_warn(dev, "Adding kernel taint for vfio-noiommu group on device\n");
+		}
+		return group;
+	}
+
+	if (!iommu_group)
+		return ERR_PTR(-EINVAL);
+
+	/*
+	 * VFIO always sets IOMMU_CACHE because we offer no way for userspace to
+	 * restore cache coherency. It has to be checked here because it is only
+	 * valid for cases where we are using iommu groups.
+	 */
+	if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY)) {
+		iommu_group_put(iommu_group);
+		return ERR_PTR(-EINVAL);
+	}
+
+	mutex_lock(&vfio.group_lock);
+	group = vfio_group_find_from_iommu(iommu_group);
+	if (group) {
+		if (WARN_ON(vfio_group_has_device(group, dev)))
+			group = ERR_PTR(-EINVAL);
+		else
+			refcount_inc(&group->drivers);
+	} else {
+		group = vfio_create_group(iommu_group, VFIO_IOMMU);
+	}
+	mutex_unlock(&vfio.group_lock);
+
+	/* The vfio_group holds a reference to the iommu_group */
+	iommu_group_put(iommu_group);
+	return group;
+}
+
+void vfio_device_group_register(struct vfio_device *device)
+{
+	mutex_lock(&device->group->device_lock);
+	list_add(&device->group_next, &device->group->device_list);
+	mutex_unlock(&device->group->device_lock);
+}
+
+void vfio_device_group_unregister(struct vfio_device *device)
+{
+	mutex_lock(&device->group->device_lock);
+	list_del(&device->group_next);
+	mutex_unlock(&device->group->device_lock);
+}
+
+int vfio_device_group_use_iommu(struct vfio_device *device)
+{
+	int ret = 0;
+
+	/*
+	 * Here we pass the KVM pointer with the group under the lock.  If the
+	 * device driver will use it, it must obtain a reference and release it
+	 * during close_device.
+	 */
+	mutex_lock(&device->group->group_lock);
+	if (!vfio_group_has_iommu(device->group)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (device->group->container) {
+		ret = vfio_group_use_container(device->group);
+		if (ret)
+			goto out_unlock;
+		vfio_device_container_register(device);
+	} else if (device->group->iommufd) {
+		ret = vfio_iommufd_bind(device, device->group->iommufd);
+	}
+
+out_unlock:
+	mutex_unlock(&device->group->group_lock);
+	return ret;
+}
+
+void vfio_device_group_unuse_iommu(struct vfio_device *device)
+{
+	mutex_lock(&device->group->group_lock);
+	if (device->group->container) {
+		vfio_device_container_unregister(device);
+		vfio_group_unuse_container(device->group);
+	} else if (device->group->iommufd) {
+		vfio_iommufd_unbind(device);
+	}
+	mutex_unlock(&device->group->group_lock);
+}
+
+struct kvm *vfio_group_get_kvm(struct vfio_group *group)
+{
+	mutex_lock(&group->group_lock);
+	if (!group->kvm) {
+		mutex_unlock(&group->group_lock);
+		return NULL;
+	}
+	/* group_lock is released by vfio_group_put_kvm() */
+	return group->kvm;
+}
+
+void vfio_group_put_kvm(struct vfio_group *group)
+{
+	mutex_unlock(&group->group_lock);
+}
+
+void vfio_device_group_finalize_open(struct vfio_device *device)
+{
+	mutex_lock(&device->group->group_lock);
+	if (device->group->container)
+		vfio_device_container_register(device);
+	mutex_unlock(&device->group->group_lock);
+}
+
+void vfio_device_group_abort_open(struct vfio_device *device)
+{
+	mutex_lock(&device->group->group_lock);
+	if (device->group->container)
+		vfio_device_container_unregister(device);
+	mutex_unlock(&device->group->group_lock);
+}
+
+/**
+ * vfio_file_iommu_group - Return the struct iommu_group for the vfio group file
+ * @file: VFIO group file
+ *
+ * The returned iommu_group is valid as long as a ref is held on the file. This
+ * returns a reference on the group. This function is deprecated, only the SPAPR
+ * path in kvm should call it.
+ */
+struct iommu_group *vfio_file_iommu_group(struct file *file)
+{
+	struct vfio_group *group = file->private_data;
+	struct iommu_group *iommu_group = NULL;
+
+	if (!IS_ENABLED(CONFIG_SPAPR_TCE_IOMMU))
+		return NULL;
+
+	if (!vfio_file_is_group(file))
+		return NULL;
+
+	mutex_lock(&group->group_lock);
+	if (group->iommu_group) {
+		iommu_group = group->iommu_group;
+		iommu_group_ref_get(iommu_group);
+	}
+	mutex_unlock(&group->group_lock);
+	return iommu_group;
+}
+EXPORT_SYMBOL_GPL(vfio_file_iommu_group);
+
+/**
+ * vfio_file_is_group - True if the file is usable with VFIO APIs
+ * @file: VFIO group file
+ */
+bool vfio_file_is_group(struct file *file)
+{
+	return file->f_op == &vfio_group_fops;
+}
+EXPORT_SYMBOL_GPL(vfio_file_is_group);
+
+/**
+ * vfio_file_enforced_coherent - True if the DMA associated with the VFIO file
+ *        is always CPU cache coherent
+ * @file: VFIO group file
+ *
+ * Enforced coherency means that the IOMMU ignores things like the PCIe no-snoop
+ * bit in DMA transactions. A return of false indicates that the user has
+ * rights to access additional instructions such as wbinvd on x86.
+ */
+bool vfio_file_enforced_coherent(struct file *file)
+{
+	struct vfio_group *group = file->private_data;
+	struct vfio_device *device;
+	bool ret = true;
+
+	if (!vfio_file_is_group(file))
+		return true;
+
+	/*
+	 * If the device does not have IOMMU_CAP_ENFORCE_CACHE_COHERENCY then
+	 * any domain later attached to it will also not support it. If the cap
+	 * is set then the iommu_domain eventually attached to the device/group
+	 * must use a domain with enforce_cache_coherency().
+	 */
+	mutex_lock(&group->device_lock);
+	list_for_each_entry(device, &group->device_list, group_next) {
+		if (!device_iommu_capable(device->dev,
+					  IOMMU_CAP_ENFORCE_CACHE_COHERENCY)) {
+			ret = false;
+			break;
+		}
+	}
+	mutex_unlock(&group->device_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_file_enforced_coherent);
+
+/**
+ * vfio_file_set_kvm - Link a kvm with VFIO drivers
+ * @file: VFIO group file
+ * @kvm: KVM to link
+ *
+ * When a VFIO device is first opened the KVM will be available in
+ * device->kvm if one was associated with the group.
+ */
+void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
+{
+	struct vfio_group *group = file->private_data;
+
+	if (!vfio_file_is_group(file))
+		return;
+
+	mutex_lock(&group->group_lock);
+	group->kvm = kvm;
+	mutex_unlock(&group->group_lock);
+}
+EXPORT_SYMBOL_GPL(vfio_file_set_kvm);
+
+/**
+ * vfio_file_has_dev - True if the VFIO file is a handle for device
+ * @file: VFIO file to check
+ * @device: Device that must be part of the file
+ *
+ * Returns true if given file has permission to manipulate the given device.
+ */
+bool vfio_file_has_dev(struct file *file, struct vfio_device *device)
+{
+	struct vfio_group *group = file->private_data;
+
+	if (!vfio_file_is_group(file))
+		return false;
+
+	return group == device->group;
+}
+EXPORT_SYMBOL_GPL(vfio_file_has_dev);
+
+bool vfio_group_has_container(struct vfio_group *group)
+{
+	return group->container;
+}
+
+static char *vfio_devnode(struct device *dev, umode_t *mode)
+{
+	return kasprintf(GFP_KERNEL, "vfio/%s", dev_name(dev));
+}
+
+int __init vfio_group_init(void)
+{
+	int ret;
+
+	ida_init(&vfio.group_ida);
+	mutex_init(&vfio.group_lock);
+	INIT_LIST_HEAD(&vfio.group_list);
+
+	ret = vfio_container_init();
+	if (ret)
+		return ret;
+
+	/* /dev/vfio/$GROUP */
+	vfio.class = class_create(THIS_MODULE, "vfio");
+	if (IS_ERR(vfio.class)) {
+		ret = PTR_ERR(vfio.class);
+		goto err_group_class;
+	}
+
+	vfio.class->devnode = vfio_devnode;
+
+	ret = alloc_chrdev_region(&vfio.group_devt, 0, MINORMASK + 1, "vfio");
+	if (ret)
+		goto err_alloc_chrdev;
+	return 0;
+
+err_alloc_chrdev:
+	class_destroy(vfio.class);
+	vfio.class = NULL;
+err_group_class:
+	vfio_container_cleanup();
+	return ret;
+}
+
+void vfio_group_cleanup(void)
+{
+	WARN_ON(!list_empty(&vfio.group_list));
+	ida_destroy(&vfio.group_ida);
+	unregister_chrdev_region(vfio.group_devt, MINORMASK + 1);
+	class_destroy(vfio.class);
+	vfio.class = NULL;
+	vfio_container_cleanup();
+}
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index d6b6bc20406b..670c9c5a55f1 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -15,6 +15,10 @@ struct iommu_group;
 struct vfio_device;
 struct vfio_container;
 
+void vfio_device_put_registration(struct vfio_device *device);
+bool vfio_device_try_get_registration(struct vfio_device *device);
+struct file *vfio_device_open_file(struct vfio_device *device);
+
 enum vfio_group_type {
 	/*
 	 * Physical device with IOMMU backing.
@@ -66,6 +70,22 @@ struct vfio_group {
 	struct iommufd_ctx		*iommufd;
 };
 
+void vfio_device_remove_group(struct vfio_device *device);
+struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
+					    enum vfio_group_type type);
+struct vfio_group *vfio_group_find_or_alloc(struct device *dev);
+void vfio_device_group_register(struct vfio_device *device);
+void vfio_device_group_unregister(struct vfio_device *device);
+int vfio_device_group_use_iommu(struct vfio_device *device);
+void vfio_device_group_unuse_iommu(struct vfio_device *device);
+struct kvm *vfio_group_get_kvm(struct vfio_group *group);
+void vfio_group_put_kvm(struct vfio_group *group);
+void vfio_device_group_finalize_open(struct vfio_device *device);
+void vfio_device_group_abort_open(struct vfio_device *device);
+bool vfio_group_has_container(struct vfio_group *group);
+int __init vfio_group_init(void);
+void vfio_group_cleanup(void);
+
 #if IS_ENABLED(CONFIG_VFIO_CONTAINER)
 /* events for the backend driver notify callback */
 enum vfio_iommu_notify_type {
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index b6d3cb35a523..a7b966b4f3fc 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -43,11 +43,6 @@
 #define DRIVER_DESC	"VFIO - User Level meta-driver"
 
 static struct vfio {
-	struct class			*class;
-	struct list_head		group_list;
-	struct mutex			group_lock; /* locks group_list */
-	struct ida			group_ida;
-	dev_t				group_devt;
 	struct class			*device_class;
 	struct ida			device_ida;
 } vfio;
@@ -56,7 +51,6 @@ bool vfio_allow_unsafe_interrupts;
 EXPORT_SYMBOL_GPL(vfio_allow_unsafe_interrupts);
 
 static DEFINE_XARRAY(vfio_device_set_xa);
-static const struct file_operations vfio_group_fops;
 
 int vfio_assign_device_set(struct vfio_device *device, void *set_id)
 {
@@ -142,168 +136,17 @@ unsigned int vfio_device_set_open_count(struct vfio_device_set *dev_set)
 }
 EXPORT_SYMBOL_GPL(vfio_device_set_open_count);
 
-/*
- * Group objects - create, release, get, put, search
- */
-static struct vfio_group *
-vfio_group_find_from_iommu(struct iommu_group *iommu_group)
-{
-	struct vfio_group *group;
-
-	lockdep_assert_held(&vfio.group_lock);
-
-	/*
-	 * group->iommu_group from the vfio.group_list cannot be NULL
-	 * under the vfio.group_lock.
-	 */
-	list_for_each_entry(group, &vfio.group_list, vfio_next) {
-		if (group->iommu_group == iommu_group)
-			return group;
-	}
-	return NULL;
-}
-
-static void vfio_group_release(struct device *dev)
-{
-	struct vfio_group *group = container_of(dev, struct vfio_group, dev);
-
-	mutex_destroy(&group->device_lock);
-	mutex_destroy(&group->group_lock);
-	WARN_ON(group->iommu_group);
-	ida_free(&vfio.group_ida, MINOR(group->dev.devt));
-	kfree(group);
-}
-
-static struct vfio_group *vfio_group_alloc(struct iommu_group *iommu_group,
-					   enum vfio_group_type type)
-{
-	struct vfio_group *group;
-	int minor;
-
-	group = kzalloc(sizeof(*group), GFP_KERNEL);
-	if (!group)
-		return ERR_PTR(-ENOMEM);
-
-	minor = ida_alloc_max(&vfio.group_ida, MINORMASK, GFP_KERNEL);
-	if (minor < 0) {
-		kfree(group);
-		return ERR_PTR(minor);
-	}
-
-	device_initialize(&group->dev);
-	group->dev.devt = MKDEV(MAJOR(vfio.group_devt), minor);
-	group->dev.class = vfio.class;
-	group->dev.release = vfio_group_release;
-	cdev_init(&group->cdev, &vfio_group_fops);
-	group->cdev.owner = THIS_MODULE;
-
-	refcount_set(&group->drivers, 1);
-	mutex_init(&group->group_lock);
-	INIT_LIST_HEAD(&group->device_list);
-	mutex_init(&group->device_lock);
-	group->iommu_group = iommu_group;
-	/* put in vfio_group_release() */
-	iommu_group_ref_get(iommu_group);
-	group->type = type;
-	BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
-
-	return group;
-}
-
-static struct vfio_group *vfio_create_group(struct iommu_group *iommu_group,
-		enum vfio_group_type type)
-{
-	struct vfio_group *group;
-	struct vfio_group *ret;
-	int err;
-
-	lockdep_assert_held(&vfio.group_lock);
-
-	group = vfio_group_alloc(iommu_group, type);
-	if (IS_ERR(group))
-		return group;
-
-	err = dev_set_name(&group->dev, "%s%d",
-			   group->type == VFIO_NO_IOMMU ? "noiommu-" : "",
-			   iommu_group_id(iommu_group));
-	if (err) {
-		ret = ERR_PTR(err);
-		goto err_put;
-	}
-
-	err = cdev_device_add(&group->cdev, &group->dev);
-	if (err) {
-		ret = ERR_PTR(err);
-		goto err_put;
-	}
-
-	list_add(&group->vfio_next, &vfio.group_list);
-
-	return group;
-
-err_put:
-	put_device(&group->dev);
-	return ret;
-}
-
-static void vfio_device_remove_group(struct vfio_device *device)
-{
-	struct vfio_group *group = device->group;
-	struct iommu_group *iommu_group;
-
-	if (group->type == VFIO_NO_IOMMU || group->type == VFIO_EMULATED_IOMMU)
-		iommu_group_remove_device(device->dev);
-
-	/* Pairs with vfio_create_group() / vfio_group_get_from_iommu() */
-	if (!refcount_dec_and_mutex_lock(&group->drivers, &vfio.group_lock))
-		return;
-	list_del(&group->vfio_next);
-
-	/*
-	 * We could concurrently probe another driver in the group that might
-	 * race vfio_device_remove_group() with vfio_get_group(), so we have to
-	 * ensure that the sysfs is all cleaned up under lock otherwise the
-	 * cdev_device_add() will fail due to the name already existing.
-	 */
-	cdev_device_del(&group->cdev, &group->dev);
-
-	mutex_lock(&group->group_lock);
-	/*
-	 * These data structures all have paired operations that can only be
-	 * undone when the caller holds a live reference on the device. Since
-	 * all pairs must be undone these WARN_ON's indicate some caller did not
-	 * properly hold the group reference.
-	 */
-	WARN_ON(!list_empty(&group->device_list));
-	WARN_ON(group->notifier.head);
-
-	/*
-	 * Revoke all users of group->iommu_group. At this point we know there
-	 * are no devices active because we are unplugging the last one. Setting
-	 * iommu_group to NULL blocks all new users.
-	 */
-	if (group->container)
-		vfio_group_detach_container(group);
-	iommu_group = group->iommu_group;
-	group->iommu_group = NULL;
-	mutex_unlock(&group->group_lock);
-	mutex_unlock(&vfio.group_lock);
-
-	iommu_group_put(iommu_group);
-	put_device(&group->dev);
-}
-
 /*
  * Device objects - create, release, get, put, search
  */
 /* Device reference always implies a group reference */
-static void vfio_device_put_registration(struct vfio_device *device)
+void vfio_device_put_registration(struct vfio_device *device)
 {
 	if (refcount_dec_and_test(&device->refcount))
 		complete(&device->comp);
 }
 
-static bool vfio_device_try_get_registration(struct vfio_device *device)
+bool vfio_device_try_get_registration(struct vfio_device *device)
 {
 	return refcount_inc_not_zero(&device->refcount);
 }
@@ -416,121 +259,6 @@ void vfio_free_device(struct vfio_device *device)
 }
 EXPORT_SYMBOL_GPL(vfio_free_device);
 
-static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
-		enum vfio_group_type type)
-{
-	struct iommu_group *iommu_group;
-	struct vfio_group *group;
-	int ret;
-
-	iommu_group = iommu_group_alloc();
-	if (IS_ERR(iommu_group))
-		return ERR_CAST(iommu_group);
-
-	ret = iommu_group_set_name(iommu_group, "vfio-noiommu");
-	if (ret)
-		goto out_put_group;
-	ret = iommu_group_add_device(iommu_group, dev);
-	if (ret)
-		goto out_put_group;
-
-	mutex_lock(&vfio.group_lock);
-	group = vfio_create_group(iommu_group, type);
-	mutex_unlock(&vfio.group_lock);
-	if (IS_ERR(group)) {
-		ret = PTR_ERR(group);
-		goto out_remove_device;
-	}
-	iommu_group_put(iommu_group);
-	return group;
-
-out_remove_device:
-	iommu_group_remove_device(dev);
-out_put_group:
-	iommu_group_put(iommu_group);
-	return ERR_PTR(ret);
-}
-
-static bool vfio_group_has_device(struct vfio_group *group, struct device *dev)
-{
-	struct vfio_device *device;
-
-	mutex_lock(&group->device_lock);
-	list_for_each_entry(device, &group->device_list, group_next) {
-		if (device->dev == dev) {
-			mutex_unlock(&group->device_lock);
-			return true;
-		}
-	}
-	mutex_unlock(&group->device_lock);
-	return false;
-}
-
-static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
-{
-	struct iommu_group *iommu_group;
-	struct vfio_group *group;
-
-	iommu_group = iommu_group_get(dev);
-	if (!iommu_group && vfio_noiommu) {
-		/*
-		 * With noiommu enabled, create an IOMMU group for devices that
-		 * don't already have one, implying no IOMMU hardware/driver
-		 * exists.  Taint the kernel because we're about to give a DMA
-		 * capable device to a user without IOMMU protection.
-		 */
-		group = vfio_noiommu_group_alloc(dev, VFIO_NO_IOMMU);
-		if (!IS_ERR(group)) {
-			add_taint(TAINT_USER, LOCKDEP_STILL_OK);
-			dev_warn(dev, "Adding kernel taint for vfio-noiommu group on device\n");
-		}
-		return group;
-	}
-
-	if (!iommu_group)
-		return ERR_PTR(-EINVAL);
-
-	/*
-	 * VFIO always sets IOMMU_CACHE because we offer no way for userspace to
-	 * restore cache coherency. It has to be checked here because it is only
-	 * valid for cases where we are using iommu groups.
-	 */
-	if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY)) {
-		iommu_group_put(iommu_group);
-		return ERR_PTR(-EINVAL);
-	}
-
-	mutex_lock(&vfio.group_lock);
-	group = vfio_group_find_from_iommu(iommu_group);
-	if (group) {
-		if (WARN_ON(vfio_group_has_device(group, dev)))
-			group = ERR_PTR(-EINVAL);
-		else
-			refcount_inc(&group->drivers);
-	} else {
-		group = vfio_create_group(iommu_group, VFIO_IOMMU);
-	}
-	mutex_unlock(&vfio.group_lock);
-
-	/* The vfio_group holds a reference to the iommu_group */
-	iommu_group_put(iommu_group);
-	return group;
-}
-
-static void vfio_device_group_register(struct vfio_device *device)
-{
-	mutex_lock(&device->group->device_lock);
-	list_add(&device->group_next, &device->group->device_list);
-	mutex_unlock(&device->group->device_lock);
-}
-
-static void vfio_device_group_unregister(struct vfio_device *device)
-{
-	mutex_lock(&device->group->device_lock);
-	list_del(&device->group_next);
-	mutex_unlock(&device->group->device_lock);
-}
-
 static int __vfio_register_dev(struct vfio_device *device,
 		struct vfio_group *group)
 {
@@ -595,35 +323,6 @@ int vfio_register_emulated_iommu_dev(struct vfio_device *device)
 }
 EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev);
 
-static struct vfio_device *vfio_device_get_from_name(struct vfio_group *group,
-						     char *buf)
-{
-	struct vfio_device *it, *device = ERR_PTR(-ENODEV);
-
-	mutex_lock(&group->device_lock);
-	list_for_each_entry(it, &group->device_list, group_next) {
-		int ret;
-
-		if (it->ops->match) {
-			ret = it->ops->match(it, buf);
-			if (ret < 0) {
-				device = ERR_PTR(ret);
-				break;
-			}
-		} else {
-			ret = !strcmp(dev_name(it->dev), buf);
-		}
-
-		if (ret && vfio_device_try_get_registration(it)) {
-			device = it;
-			break;
-		}
-	}
-	mutex_unlock(&group->device_lock);
-
-	return device;
-}
-
 /*
  * Decrement the device reference count and wait for the device to be
  * removed.  Open file descriptors for the device... */
@@ -665,108 +364,6 @@ void vfio_unregister_group_dev(struct vfio_device *device)
 }
 EXPORT_SYMBOL_GPL(vfio_unregister_group_dev);
 
-/*
- * VFIO Group fd, /dev/vfio/$GROUP
- */
-static bool vfio_group_has_iommu(struct vfio_group *group)
-{
-	lockdep_assert_held(&group->group_lock);
-	/*
-	 * There can only be users if there is a container, and if there is a
-	 * container there must be users.
-	 */
-	WARN_ON(!group->container != !group->container_users);
-
-	return group->container || group->iommufd;
-}
-
-/*
- * VFIO_GROUP_UNSET_CONTAINER should fail if there are other users or
- * if there was no container to unset.  Since the ioctl is called on
- * the group, we know that still exists, therefore the only valid
- * transition here is 1->0.
- */
-static int vfio_group_ioctl_unset_container(struct vfio_group *group)
-{
-	int ret = 0;
-
-	mutex_lock(&group->group_lock);
-	if (!vfio_group_has_iommu(group)) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-	if (group->container) {
-		if (group->container_users != 1) {
-			ret = -EBUSY;
-			goto out_unlock;
-		}
-		vfio_group_detach_container(group);
-	}
-	if (group->iommufd) {
-		iommufd_ctx_put(group->iommufd);
-		group->iommufd = NULL;
-	}
-
-out_unlock:
-	mutex_unlock(&group->group_lock);
-	return ret;
-}
-
-static int vfio_group_ioctl_set_container(struct vfio_group *group,
-					  int __user *arg)
-{
-	struct vfio_container *container;
-	struct iommufd_ctx *iommufd;
-	struct fd f;
-	int ret;
-	int fd;
-
-	if (get_user(fd, arg))
-		return -EFAULT;
-
-	f = fdget(fd);
-	if (!f.file)
-		return -EBADF;
-
-	mutex_lock(&group->group_lock);
-	if (vfio_group_has_iommu(group)) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-	if (!group->iommu_group) {
-		ret = -ENODEV;
-		goto out_unlock;
-	}
-
-	container = vfio_container_from_file(f.file);
-	if (container) {
-		ret = vfio_container_attach_group(container, group);
-		goto out_unlock;
-	}
-
-	iommufd = iommufd_ctx_from_file(f.file);
-	if (!IS_ERR(iommufd)) {
-		u32 ioas_id;
-
-		ret = iommufd_vfio_compat_ioas_id(iommufd, &ioas_id);
-		if (ret) {
-			iommufd_ctx_put(group->iommufd);
-			goto out_unlock;
-		}
-
-		group->iommufd = iommufd;
-		goto out_unlock;
-	}
-
-	/* The FD passed is not recognized. */
-	ret = -EBADFD;
-
-out_unlock:
-	mutex_unlock(&group->group_lock);
-	fdput(f);
-	return ret;
-}
-
 static const struct file_operations vfio_device_fops;
 
 /* true if the vfio_device has open_device() called but not close_device() */
@@ -775,63 +372,6 @@ static bool vfio_assert_device_open(struct vfio_device *device)
 	return !WARN_ON_ONCE(!READ_ONCE(device->open_count));
 }
 
-static int vfio_device_group_use_iommu(struct vfio_device *device)
-{
-	int ret = 0;
-
-	/*
-	 * Here we pass the KVM pointer with the group under the lock.  If the
-	 * device driver will use it, it must obtain a reference and release it
-	 * during close_device.
-	 */
-	mutex_lock(&device->group->group_lock);
-	if (!vfio_group_has_iommu(device->group)) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	if (device->group->container) {
-		ret = vfio_group_use_container(device->group);
-		if (ret)
-			goto out_unlock;
-		vfio_device_container_register(device);
-	} else if (device->group->iommufd) {
-		ret = vfio_iommufd_bind(device, device->group->iommufd);
-	}
-
-out_unlock:
-	mutex_unlock(&device->group->group_lock);
-	return ret;
-}
-
-static void vfio_device_group_unuse_iommu(struct vfio_device *device)
-{
-	mutex_lock(&device->group->group_lock);
-	if (device->group->container) {
-		vfio_device_container_unregister(device);
-		vfio_group_unuse_container(device->group);
-	} else if (device->group->iommufd) {
-		vfio_iommufd_unbind(device);
-	}
-	mutex_unlock(&device->group->group_lock);
-}
-
-static struct kvm *vfio_group_get_kvm(struct vfio_group *group)
-{
-	mutex_lock(&group->group_lock);
-	if (!group->kvm) {
-		mutex_unlock(&group->group_lock);
-		return NULL;
-	}
-	/* group_lock is released in the vfio_group_put_kvm() */
-	return group->kvm;
-}
-
-static void vfio_group_put_kvm(struct vfio_group *group)
-{
-	mutex_unlock(&group->group_lock);
-}
-
 static int vfio_device_first_open(struct vfio_device *device)
 {
 	struct kvm *kvm;
@@ -908,7 +448,7 @@ static void vfio_device_close(struct vfio_device *device)
 	mutex_unlock(&device->dev_set->lock);
 }
 
-static struct file *vfio_device_open_file(struct vfio_device *device)
+struct file *vfio_device_open_file(struct vfio_device *device)
 {
 	struct file *filep;
 	int ret;
@@ -947,177 +487,6 @@ static struct file *vfio_device_open_file(struct vfio_device *device)
 	return ERR_PTR(ret);
 }
 
-static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
-					  char __user *arg)
-{
-	struct vfio_device *device;
-	struct file *filep;
-	char *buf;
-	int fdno;
-	int ret;
-
-	buf = strndup_user(arg, PAGE_SIZE);
-	if (IS_ERR(buf))
-		return PTR_ERR(buf);
-
-	device = vfio_device_get_from_name(group, buf);
-	kfree(buf);
-	if (IS_ERR(device))
-		return PTR_ERR(device);
-
-	fdno = get_unused_fd_flags(O_CLOEXEC);
-	if (fdno < 0) {
-		ret = fdno;
-		goto err_put_device;
-	}
-
-	filep = vfio_device_open_file(device);
-	if (IS_ERR(filep)) {
-		ret = PTR_ERR(filep);
-		goto err_put_fdno;
-	}
-
-	if (group->type == VFIO_NO_IOMMU)
-		dev_warn(device->dev, "vfio-noiommu device opened by user "
-			 "(%s:%d)\n", current->comm, task_pid_nr(current));
-
-	fd_install(fdno, filep);
-	return fdno;
-
-err_put_fdno:
-	put_unused_fd(fdno);
-err_put_device:
-	vfio_device_put_registration(device);
-	return ret;
-}
-
-static int vfio_group_ioctl_get_status(struct vfio_group *group,
-				       struct vfio_group_status __user *arg)
-{
-	unsigned long minsz = offsetofend(struct vfio_group_status, flags);
-	struct vfio_group_status status;
-
-	if (copy_from_user(&status, arg, minsz))
-		return -EFAULT;
-
-	if (status.argsz < minsz)
-		return -EINVAL;
-
-	status.flags = 0;
-
-	mutex_lock(&group->group_lock);
-	if (!group->iommu_group) {
-		mutex_unlock(&group->group_lock);
-		return -ENODEV;
-	}
-
-	/*
-	 * With the container FD the iommu_group_claim_dma_owner() is done
-	 * during SET_CONTAINER but for IOMMUFD this is done during
-	 * VFIO_GROUP_GET_DEVICE_FD. Meaning that with iommufd
-	 * VFIO_GROUP_FLAGS_VIABLE could be set but GET_DEVICE_FD will fail due
-	 * to viability.
-	 */
-	if (vfio_group_has_iommu(group))
-		status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
-				VFIO_GROUP_FLAGS_VIABLE;
-	else if (!iommu_group_dma_owner_claimed(group->iommu_group))
-		status.flags |= VFIO_GROUP_FLAGS_VIABLE;
-	mutex_unlock(&group->group_lock);
-
-	if (copy_to_user(arg, &status, minsz))
-		return -EFAULT;
-	return 0;
-}
-
-static long vfio_group_fops_unl_ioctl(struct file *filep,
-				      unsigned int cmd, unsigned long arg)
-{
-	struct vfio_group *group = filep->private_data;
-	void __user *uarg = (void __user *)arg;
-
-	switch (cmd) {
-	case VFIO_GROUP_GET_DEVICE_FD:
-		return vfio_group_ioctl_get_device_fd(group, uarg);
-	case VFIO_GROUP_GET_STATUS:
-		return vfio_group_ioctl_get_status(group, uarg);
-	case VFIO_GROUP_SET_CONTAINER:
-		return vfio_group_ioctl_set_container(group, uarg);
-	case VFIO_GROUP_UNSET_CONTAINER:
-		return vfio_group_ioctl_unset_container(group);
-	default:
-		return -ENOTTY;
-	}
-}
-
-static int vfio_group_fops_open(struct inode *inode, struct file *filep)
-{
-	struct vfio_group *group =
-		container_of(inode->i_cdev, struct vfio_group, cdev);
-	int ret;
-
-	mutex_lock(&group->group_lock);
-
-	/*
-	 * drivers can be zero if this races with vfio_device_remove_group(), it
-	 * will be stable at 0 under the group rwsem
-	 */
-	if (refcount_read(&group->drivers) == 0) {
-		ret = -ENODEV;
-		goto out_unlock;
-	}
-
-	if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO)) {
-		ret = -EPERM;
-		goto out_unlock;
-	}
-
-	/*
-	 * Do we need multiple instances of the group open?  Seems not.
-	 */
-	if (group->opened_file) {
-		ret = -EBUSY;
-		goto out_unlock;
-	}
-	group->opened_file = filep;
-	filep->private_data = group;
-	ret = 0;
-out_unlock:
-	mutex_unlock(&group->group_lock);
-	return ret;
-}
-
-static int vfio_group_fops_release(struct inode *inode, struct file *filep)
-{
-	struct vfio_group *group = filep->private_data;
-
-	filep->private_data = NULL;
-
-	mutex_lock(&group->group_lock);
-	/*
-	 * Device FDs hold a group file reference, therefore the group release
-	 * is only called when there are no open devices.
-	 */
-	WARN_ON(group->notifier.head);
-	if (group->container)
-		vfio_group_detach_container(group);
-	if (group->iommufd) {
-		iommufd_ctx_put(group->iommufd);
-		group->iommufd = NULL;
-	}
-	group->opened_file = NULL;
-	mutex_unlock(&group->group_lock);
-	return 0;
-}
-
-static const struct file_operations vfio_group_fops = {
-	.owner		= THIS_MODULE,
-	.unlocked_ioctl	= vfio_group_fops_unl_ioctl,
-	.compat_ioctl	= compat_ptr_ioctl,
-	.open		= vfio_group_fops_open,
-	.release	= vfio_group_fops_release,
-};
-
 /*
  * Wrapper around pm_runtime_resume_and_get().
  * Return error code on failure or 0 on success.
@@ -1691,121 +1060,6 @@ static const struct file_operations vfio_device_fops = {
 	.mmap		= vfio_device_fops_mmap,
 };
 
-/**
- * vfio_file_iommu_group - Return the struct iommu_group for the vfio group file
- * @file: VFIO group file
- *
- * The returned iommu_group is valid as long as a ref is held on the file. This
- * returns a reference on the group. This function is deprecated, only the SPAPR
- * path in kvm should call it.
- */
-struct iommu_group *vfio_file_iommu_group(struct file *file)
-{
-	struct vfio_group *group = file->private_data;
-	struct iommu_group *iommu_group = NULL;
-
-	if (!IS_ENABLED(CONFIG_SPAPR_TCE_IOMMU))
-		return NULL;
-
-	if (!vfio_file_is_group(file))
-		return NULL;
-
-	mutex_lock(&group->group_lock);
-	if (group->iommu_group) {
-		iommu_group = group->iommu_group;
-		iommu_group_ref_get(iommu_group);
-	}
-	mutex_unlock(&group->group_lock);
-	return iommu_group;
-}
-EXPORT_SYMBOL_GPL(vfio_file_iommu_group);
-
-/**
- * vfio_file_is_group - True if the file is usable with VFIO APIs
- * @file: VFIO group file
- */
-bool vfio_file_is_group(struct file *file)
-{
-	return file->f_op == &vfio_group_fops;
-}
-EXPORT_SYMBOL_GPL(vfio_file_is_group);
-
-/**
- * vfio_file_enforced_coherent - True if the DMA associated with the VFIO file
- *        is always CPU cache coherent
- * @file: VFIO group file
- *
- * Enforced coherency means that the IOMMU ignores things like the PCIe no-snoop
- * bit in DMA transactions. A return of false indicates that the user has
- * rights to access additional instructions such as wbinvd on x86.
- */
-bool vfio_file_enforced_coherent(struct file *file)
-{
-	struct vfio_group *group = file->private_data;
-	struct vfio_device *device;
-	bool ret = true;
-
-	if (!vfio_file_is_group(file))
-		return true;
-
-	/*
-	 * If the device does not have IOMMU_CAP_ENFORCE_CACHE_COHERENCY then
-	 * any domain later attached to it will also not support it. If the cap
-	 * is set then the iommu_domain eventually attached to the device/group
-	 * must use a domain with enforce_cache_coherency().
-	 */
-	mutex_lock(&group->device_lock);
-	list_for_each_entry(device, &group->device_list, group_next) {
-		if (!device_iommu_capable(device->dev,
-					  IOMMU_CAP_ENFORCE_CACHE_COHERENCY)) {
-			ret = false;
-			break;
-		}
-	}
-	mutex_unlock(&group->device_lock);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(vfio_file_enforced_coherent);
-
-/**
- * vfio_file_set_kvm - Link a kvm with VFIO drivers
- * @file: VFIO group file
- * @kvm: KVM to link
- *
- * When a VFIO device is first opened the KVM will be available in
- * device->kvm if one was associated with the group.
- */
-void vfio_file_set_kvm(struct file *file, struct kvm *kvm)
-{
-	struct vfio_group *group = file->private_data;
-
-	if (!vfio_file_is_group(file))
-		return;
-
-	mutex_lock(&group->group_lock);
-	group->kvm = kvm;
-	mutex_unlock(&group->group_lock);
-}
-EXPORT_SYMBOL_GPL(vfio_file_set_kvm);
-
-/**
- * vfio_file_has_dev - True if the VFIO file is a handle for device
- * @file: VFIO file to check
- * @device: Device that must be part of the file
- *
- * Returns true if given file has permission to manipulate the given device.
- */
-bool vfio_file_has_dev(struct file *file, struct vfio_device *device)
-{
-	struct vfio_group *group = file->private_data;
-
-	if (!vfio_file_is_group(file))
-		return false;
-
-	return group == device->group;
-}
-EXPORT_SYMBOL_GPL(vfio_file_has_dev);
-
 /*
  * Sub-module support
  */
@@ -1925,11 +1179,6 @@ int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr, int num_irqs,
 }
 EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
 
-static bool vfio_group_has_container(struct vfio_group *group)
-{
-	return group->container;
-}
-
 /*
  * Pin contiguous user pages and return their associated host pages for local
  * domain only.
@@ -2052,55 +1301,6 @@ EXPORT_SYMBOL(vfio_dma_rw);
 /*
  * Module/class support
  */
-static char *vfio_devnode(struct device *dev, umode_t *mode)
-{
-	return kasprintf(GFP_KERNEL, "vfio/%s", dev_name(dev));
-}
-
-static int __init vfio_group_init(void)
-{
-	int ret;
-
-	ida_init(&vfio.group_ida);
-	mutex_init(&vfio.group_lock);
-	INIT_LIST_HEAD(&vfio.group_list);
-
-	ret = vfio_container_init();
-	if (ret)
-		return ret;
-
-	/* /dev/vfio/$GROUP */
-	vfio.class = class_create(THIS_MODULE, "vfio");
-	if (IS_ERR(vfio.class)) {
-		ret = PTR_ERR(vfio.class);
-		goto err_group_class;
-	}
-
-	vfio.class->devnode = vfio_devnode;
-
-	ret = alloc_chrdev_region(&vfio.group_devt, 0, MINORMASK + 1, "vfio");
-	if (ret)
-		goto err_alloc_chrdev;
-	return 0;
-
-err_alloc_chrdev:
-	class_destroy(vfio.class);
-	vfio.class = NULL;
-err_group_class:
-	vfio_container_cleanup();
-	return ret;
-}
-
-static void vfio_group_cleanup(void)
-{
-	WARN_ON(!list_empty(&vfio.group_list));
-	ida_destroy(&vfio.group_ida);
-	unregister_chrdev_region(vfio.group_devt, MINORMASK + 1);
-	class_destroy(vfio.class);
-	vfio.class = NULL;
-	vfio_container_cleanup();
-}
-
 static int __init vfio_init(void)
 {
 	int ret;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [RFC 09/10] vfio: Refactor dma APIs for emulated devices
  2022-11-23 15:01 ` [RFC 09/10] vfio: Refactor dma APIs for emulated devices Yi Liu
@ 2022-11-23 16:51   ` Jason Gunthorpe
  2022-11-24  3:05     ` Yi Liu
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2022-11-23 16:51 UTC (permalink / raw)
  To: Yi Liu
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On Wed, Nov 23, 2022 at 07:01:12AM -0800, Yi Liu wrote:
> Use group helpers instead of open-coding group-related logic in the
> APIs. This prepares for moving group-specific code out of vfio_main.c.
> 
> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
>  drivers/vfio/container.c | 20 +++++++++++++-------
>  drivers/vfio/vfio.h      | 32 ++++++++++++++++----------------
>  drivers/vfio/vfio_main.c | 26 +++++++++++++++-----------
>  3 files changed, 44 insertions(+), 34 deletions(-)
> 
> diff --git a/drivers/vfio/container.c b/drivers/vfio/container.c
> index 6b362d97d682..e0d11ab7229a 100644
> --- a/drivers/vfio/container.c
> +++ b/drivers/vfio/container.c
> @@ -540,11 +540,13 @@ void vfio_group_unuse_container(struct vfio_group *group)
>  	fput(group->opened_file);
>  }
>  
> -int vfio_container_pin_pages(struct vfio_container *container,
> -			     struct iommu_group *iommu_group, dma_addr_t iova,
> -			     int npage, int prot, struct page **pages)
> +int vfio_group_container_pin_pages(struct vfio_group *group,
> +				   dma_addr_t iova, int npage,
> +				   int prot, struct page **pages)
>  {
> +	struct vfio_container *container = group->container;
>  	struct vfio_iommu_driver *driver = container->iommu_driver;
> +	struct iommu_group *iommu_group = group->iommu_group;
>  
>  	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>  		return -E2BIG;
> @@ -555,9 +557,11 @@ int vfio_container_pin_pages(struct vfio_container *container,
>  				      npage, prot, pages);
>  }
>  
> -void vfio_container_unpin_pages(struct vfio_container *container,
> -				dma_addr_t iova, int npage)
> +void vfio_group_container_unpin_pages(struct vfio_group *group,
> +				      dma_addr_t iova, int npage)
>  {
> +	struct vfio_container *container = group->container;
> +
>  	if (WARN_ON(npage <= 0 || npage > VFIO_PIN_PAGES_MAX_ENTRIES))
>  		return;
>  
> @@ -565,9 +569,11 @@ void vfio_container_unpin_pages(struct vfio_container *container,
>  						  npage);
>  }
>  
> -int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
> -			  void *data, size_t len, bool write)
> +int vfio_group_container_dma_rw(struct vfio_group *group,
> +				dma_addr_t iova, void *data,
> +				size_t len, bool write)
>  {
> +	struct vfio_container *container = group->container;
>  	struct vfio_iommu_driver *driver = container->iommu_driver;
>  
>  	if (unlikely(!driver || !driver->ops->dma_rw))
> diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
> index 3378714a7462..d6b6bc20406b 100644
> --- a/drivers/vfio/vfio.h
> +++ b/drivers/vfio/vfio.h
> @@ -122,13 +122,14 @@ int vfio_container_attach_group(struct vfio_container *container,
>  void vfio_group_detach_container(struct vfio_group *group);
>  void vfio_device_container_register(struct vfio_device *device);
>  void vfio_device_container_unregister(struct vfio_device *device);
> -int vfio_container_pin_pages(struct vfio_container *container,
> -			     struct iommu_group *iommu_group, dma_addr_t iova,
> -			     int npage, int prot, struct page **pages);
> -void vfio_container_unpin_pages(struct vfio_container *container,
> -				dma_addr_t iova, int npage);
> -int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
> -			  void *data, size_t len, bool write);
> +int vfio_group_container_pin_pages(struct vfio_group *group,
> +				   dma_addr_t iova, int npage,
> +				   int prot, struct page **pages);
> +void vfio_group_container_unpin_pages(struct vfio_group *group,
> +				      dma_addr_t iova, int npage);
> +int vfio_group_container_dma_rw(struct vfio_group *group,
> +				dma_addr_t iova, void *data,
> +				size_t len, bool write);
>  
>  int __init vfio_container_init(void);
>  void vfio_container_cleanup(void);
> @@ -166,22 +167,21 @@ static inline void vfio_device_container_unregister(struct vfio_device *device)
>  {
>  }
>  
> -static inline int vfio_container_pin_pages(struct vfio_container *container,
> -					   struct iommu_group *iommu_group,
> -					   dma_addr_t iova, int npage, int prot,
> -					   struct page **pages)
> +static inline int vfio_group_container_pin_pages(struct vfio_group *group,
> +						 dma_addr_t iova, int npage,
> +						 int prot, struct page **pages)
>  {
>  	return -EOPNOTSUPP;
>  }
>  
> -static inline void vfio_container_unpin_pages(struct vfio_container *container,
> -					      dma_addr_t iova, int npage)
> +static inline void vfio_group_container_unpin_pages(struct vfio_group *group,
> +						    dma_addr_t iova, int npage)
>  {
>  }
>  
> -static inline int vfio_container_dma_rw(struct vfio_container *container,
> -					dma_addr_t iova, void *data, size_t len,
> -					bool write)
> +static inline int vfio_group_container_dma_rw(struct vfio_group *group,
> +					      dma_addr_t iova, void *data,
> +					      size_t len, bool write)
>  {
>  	return -EOPNOTSUPP;
>  }
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index cde258f4ea17..b6d3cb35a523 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -1925,6 +1925,11 @@ int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr, int num_irqs,
>  }
>  EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
>  
> +static bool vfio_group_has_container(struct vfio_group *group)
> +{
> +	return group->container;
> +}

This should probably be
 
  vfio_device_has_container(struct vfio_device *device)

And it just returns false if the group code is compiled out
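The suggested helper could look roughly like the sketch below. This is a hedged userspace illustration: the struct layouts, the CONFIG_VFIO_GROUP symbol, and the field names are assumptions for discussion, not the kernel's real definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define CONFIG_VFIO_GROUP 1	/* undefine to emulate compiling group.c out */

/* Illustrative stand-ins for the kernel structures. */
struct vfio_container { int placeholder; };
struct vfio_group { struct vfio_container *container; };
struct vfio_device {
#ifdef CONFIG_VFIO_GROUP
	struct vfio_group *group;	/* only exists when group.c is built */
#endif
};

/* Device-centric helper: callers never dereference device->group
 * themselves, and the helper degrades to a constant false when the
 * group infrastructure is compiled out. */
static inline bool vfio_device_has_container(struct vfio_device *device)
{
#ifdef CONFIG_VFIO_GROUP
	return device->group && device->group->container;
#else
	(void)device;
	return false;
#endif
}
```

With CONFIG_VFIO_GROUP undefined, struct vfio_device loses its group member and the helper collapses to a constant false, so common code that calls it needs no #ifdefs of its own.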

Jason


* Re: [RFC 10/10] vfio: Move vfio group specific code into group.c
  2022-11-23 15:01 ` [RFC 10/10] vfio: Move vfio group specific code into group.c Yi Liu
@ 2022-11-23 18:37   ` Jason Gunthorpe
  2022-11-24  3:06     ` Yi Liu
  0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2022-11-23 18:37 UTC (permalink / raw)
  To: Yi Liu
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On Wed, Nov 23, 2022 at 07:01:13AM -0800, Yi Liu wrote:

> +void vfio_device_group_abort_open(struct vfio_device *device)
> +{
> +	mutex_lock(&device->group->group_lock);
> +	if (device->group->container)
> +		vfio_device_container_unregister(device);
> +	mutex_unlock(&device->group->group_lock);
> +}

I'm looking at your git branch and I don't see this called?

drivers/vfio/group.c:void vfio_device_group_abort_open(struct vfio_device *device)
drivers/vfio/vfio.h:void vfio_device_group_abort_open(struct vfio_device *device);

?

Jason


* Re: [RFC 00/10]  Move group specific code into group.c
  2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
                   ` (9 preceding siblings ...)
  2022-11-23 15:01 ` [RFC 10/10] vfio: Move vfio group specific code into group.c Yi Liu
@ 2022-11-23 18:41 ` Jason Gunthorpe
  2022-11-24  3:15   ` Yi Liu
  10 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2022-11-23 18:41 UTC (permalink / raw)
  To: Yi Liu
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On Wed, Nov 23, 2022 at 07:01:03AM -0800, Yi Liu wrote:
> With the introduction of iommufd[1], VFIO is moving toward providing a
> device-centric uAPI after adapting to iommufd. With this trend, the
> existing VFIO group infrastructure becomes optional once VFIO is
> converted to be device centric.
> 
> This series moves the group-specific code out of vfio_main.c, preparing
> for compiling the group infrastructure out after adding the vfio device
> cdev[2].
> 
> Complete code in below branch:
> 
> https://github.com/yiliu1765/iommufd/commits/vfio_group_split_rfcv1
> 
> This is based on Jason's "Connect VFIO to IOMMUFD"[3] and my "Make mdev driver
> dma_unmap callback tolerant to unmaps come before device open"[4]
> 
> [1] https://lore.kernel.org/all/0-v5-4001c2997bd0+30c-iommufd_jgg@nvidia.com/
> [2] https://github.com/yiliu1765/iommufd/tree/wip/vfio_device_cdev
> [3] https://lore.kernel.org/kvm/063990c3-c244-1f7f-4e01-348023832066@intel.com/T/#t
> [4] https://lore.kernel.org/kvm/20221123134832.429589-1-yi.l.liu@intel.com/T/#t

I looked at this for a while, I think you should squish the below into
the series too.

A good goal is to make it so we can compile out vfio_device::group
entirely when group.c is disabled. This makes the compile time
checking stronger (adjust the cdev patch to do this). It means
removing all device->group references from vfio_main.c, which the
below does:

diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index d8ef098c1f74a6..3a69839c65ff75 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -476,8 +476,8 @@ void vfio_device_remove_group(struct vfio_device *device)
 	put_device(&group->dev);
 }
 
-struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
-					    enum vfio_group_type type)
+static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
+						   enum vfio_group_type type)
 {
 	struct iommu_group *iommu_group;
 	struct vfio_group *group;
@@ -526,7 +526,7 @@ static bool vfio_group_has_device(struct vfio_group *group, struct device *dev)
 	return false;
 }
 
-struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
+static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 {
 	struct iommu_group *iommu_group;
 	struct vfio_group *group;
@@ -577,6 +577,22 @@ struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
 	return group;
 }
 
+int vfio_device_set_group(struct vfio_device *device, enum vfio_group_type type)
+{
+	struct vfio_group *group;
+
+	if (type == VFIO_IOMMU)
+		group = vfio_group_find_or_alloc(device->dev);
+	else
+		group = vfio_noiommu_group_alloc(device->dev, type);
+	if (IS_ERR(group))
+		return PTR_ERR(group);
+
+	/* Our reference on group is moved to the device */
+	device->group = group;
+	return 0;
+}
+
 void vfio_device_group_register(struct vfio_device *device)
 {
 	mutex_lock(&device->group->device_lock);
@@ -632,8 +648,10 @@ void vfio_device_group_unuse_iommu(struct vfio_device *device)
 	mutex_unlock(&device->group->group_lock);
 }
 
-struct kvm *vfio_group_get_kvm(struct vfio_group *group)
+struct kvm *vfio_device_get_group_kvm(struct vfio_device *device)
 {
+	struct vfio_group *group = device->group;
+
 	mutex_lock(&group->group_lock);
 	if (!group->kvm) {
 		mutex_unlock(&group->group_lock);
@@ -643,24 +661,8 @@ struct kvm *vfio_group_get_kvm(struct vfio_group *group)
 	return group->kvm;
 }
 
-void vfio_group_put_kvm(struct vfio_group *group)
-{
-	mutex_unlock(&group->group_lock);
-}
-
-void vfio_device_group_finalize_open(struct vfio_device *device)
+void vfio_device_put_group_kvm(struct vfio_device *device)
 {
-	mutex_lock(&device->group->group_lock);
-	if (device->group->container)
-		vfio_device_container_register(device);
-	mutex_unlock(&device->group->group_lock);
-}
-
-void vfio_device_group_abort_open(struct vfio_device *device)
-{
-	mutex_lock(&device->group->group_lock);
-	if (device->group->container)
-		vfio_device_container_unregister(device);
 	mutex_unlock(&device->group->group_lock);
 }
 
@@ -779,9 +781,9 @@ bool vfio_file_has_dev(struct file *file, struct vfio_device *device)
 }
 EXPORT_SYMBOL_GPL(vfio_file_has_dev);
 
-bool vfio_group_has_container(struct vfio_group *group)
+bool vfio_device_has_container(struct vfio_device *device)
 {
-	return group->container;
+	return device->group->container;
 }
 
 static char *vfio_devnode(struct device *dev, umode_t *mode)
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index 670c9c5a55f1fc..e69bfcefee400e 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -70,19 +70,16 @@ struct vfio_group {
 	struct iommufd_ctx		*iommufd;
 };
 
+int vfio_device_set_group(struct vfio_device *device,
+			  enum vfio_group_type type);
 void vfio_device_remove_group(struct vfio_device *device);
-struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
-					    enum vfio_group_type type);
-struct vfio_group *vfio_group_find_or_alloc(struct device *dev);
 void vfio_device_group_register(struct vfio_device *device);
 void vfio_device_group_unregister(struct vfio_device *device);
 int vfio_device_group_use_iommu(struct vfio_device *device);
 void vfio_device_group_unuse_iommu(struct vfio_device *device);
-struct kvm *vfio_group_get_kvm(struct vfio_group *group);
-void vfio_group_put_kvm(struct vfio_group *group);
-void vfio_device_group_finalize_open(struct vfio_device *device);
-void vfio_device_group_abort_open(struct vfio_device *device);
-bool vfio_group_has_container(struct vfio_group *group);
+struct kvm *vfio_device_get_group_kvm(struct vfio_device *device);
+void vfio_device_put_group_kvm(struct vfio_device *device);
+bool vfio_device_has_container(struct vfio_device *device);
 int __init vfio_group_init(void);
 void vfio_group_cleanup(void);
 
@@ -142,12 +139,12 @@ int vfio_container_attach_group(struct vfio_container *container,
 void vfio_group_detach_container(struct vfio_group *group);
 void vfio_device_container_register(struct vfio_device *device);
 void vfio_device_container_unregister(struct vfio_device *device);
-int vfio_group_container_pin_pages(struct vfio_group *group,
+int vfio_device_container_pin_pages(struct vfio_device *device,
 				   dma_addr_t iova, int npage,
 				   int prot, struct page **pages);
-void vfio_group_container_unpin_pages(struct vfio_group *group,
+void vfio_device_container_unpin_pages(struct vfio_device *device,
 				      dma_addr_t iova, int npage);
-int vfio_group_container_dma_rw(struct vfio_group *group,
+int vfio_device_container_dma_rw(struct vfio_device *device,
 				dma_addr_t iova, void *data,
 				size_t len, bool write);
 
@@ -187,21 +184,21 @@ static inline void vfio_device_container_unregister(struct vfio_device *device)
 {
 }
 
-static inline int vfio_group_container_pin_pages(struct vfio_group *group,
-						 dma_addr_t iova, int npage,
-						 int prot, struct page **pages)
+static inline int vfio_device_container_pin_pages(struct vfio_device *device,
+						  dma_addr_t iova, int npage,
+						  int prot, struct page **pages)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline void vfio_group_container_unpin_pages(struct vfio_group *group,
-						    dma_addr_t iova, int npage)
+static inline void vfio_device_container_unpin_pages(struct vfio_device *device,
+						     dma_addr_t iova, int npage)
 {
 }
 
-static inline int vfio_group_container_dma_rw(struct vfio_group *group,
-					      dma_addr_t iova, void *data,
-					      size_t len, bool write)
+static inline int vfio_device_container_dma_rw(struct vfio_device *device,
+					       dma_addr_t iova, void *data,
+					       size_t len, bool write)
 {
 	return -EOPNOTSUPP;
 }
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index a7b966b4f3fc86..3108e92a5cb20b 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -260,17 +260,10 @@ void vfio_free_device(struct vfio_device *device)
 EXPORT_SYMBOL_GPL(vfio_free_device);
 
 static int __vfio_register_dev(struct vfio_device *device,
-		struct vfio_group *group)
+			       enum vfio_group_type type)
 {
 	int ret;
 
-	/*
-	 * In all cases group is the output of one of the group allocation
-	 * functions and we have group->drivers incremented for us.
-	 */
-	if (IS_ERR(group))
-		return PTR_ERR(group);
-
 	if (WARN_ON(device->ops->bind_iommufd &&
 		    (!device->ops->unbind_iommufd ||
 		     !device->ops->attach_ioas)))
@@ -283,16 +276,19 @@ static int __vfio_register_dev(struct vfio_device *device,
 	if (!device->dev_set)
 		vfio_assign_device_set(device, device);
 
-	/* Our reference on group is moved to the device */
-	device->group = group;
-
 	ret = dev_set_name(&device->device, "vfio%d", device->index);
 	if (ret)
-		goto err_out;
+		return ret;
 
-	ret = device_add(&device->device);
+	ret = vfio_device_set_group(device, type);
 	if (ret)
-		goto err_out;
+		return ret;
+
+	ret = device_add(&device->device);
+	if (ret) {
+		vfio_device_remove_group(device);
+		return ret;
+	}
 
 	/* Refcounting can't start until the driver calls register */
 	refcount_set(&device->refcount, 1);
@@ -300,15 +296,12 @@ static int __vfio_register_dev(struct vfio_device *device,
 	vfio_device_group_register(device);
 
 	return 0;
-err_out:
-	vfio_device_remove_group(device);
-	return ret;
 }
 
 int vfio_register_group_dev(struct vfio_device *device)
 {
-	return __vfio_register_dev(device,
-		vfio_group_find_or_alloc(device->dev));
+	return __vfio_register_dev(device, VFIO_IOMMU);
+
 }
 EXPORT_SYMBOL_GPL(vfio_register_group_dev);
 
@@ -318,8 +311,7 @@ EXPORT_SYMBOL_GPL(vfio_register_group_dev);
  */
 int vfio_register_emulated_iommu_dev(struct vfio_device *device)
 {
-	return __vfio_register_dev(device,
-		vfio_noiommu_group_alloc(device->dev, VFIO_EMULATED_IOMMU));
+	return __vfio_register_dev(device, VFIO_EMULATED_IOMMU);
 }
 EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev);
 
@@ -386,7 +378,7 @@ static int vfio_device_first_open(struct vfio_device *device)
 	if (ret)
 		goto err_module_put;
 
-	kvm = vfio_group_get_kvm(device->group);
+	kvm = vfio_device_get_group_kvm(device);
 	if (!kvm) {
 		ret = -EINVAL;
 		goto err_unuse_iommu;
@@ -398,12 +390,12 @@ static int vfio_device_first_open(struct vfio_device *device)
 		if (ret)
 			goto err_container;
 	}
-	vfio_group_put_kvm(device->group);
+	vfio_device_put_group_kvm(device);
 	return 0;
 
 err_container:
 	device->kvm = NULL;
-	vfio_group_put_kvm(device->group);
+	vfio_device_put_group_kvm(device);
 err_unuse_iommu:
 	vfio_device_group_unuse_iommu(device);
 err_module_put:
@@ -1199,8 +1191,8 @@ int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
 	/* group->container cannot change while a vfio device is open */
 	if (!pages || !npage || WARN_ON(!vfio_assert_device_open(device)))
 		return -EINVAL;
-	if (vfio_group_has_container(device->group))
-		return vfio_group_container_pin_pages(device->group, iova,
+	if (vfio_device_has_container(device))
+		return vfio_device_container_pin_pages(device, iova,
 						      npage, prot, pages);
 	if (device->iommufd_access) {
 		int ret;
@@ -1237,8 +1229,8 @@ void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova, int npage)
 	if (WARN_ON(!vfio_assert_device_open(device)))
 		return;
 
-	if (vfio_group_has_container(device->group)) {
-		vfio_group_container_unpin_pages(device->group, iova,
+	if (vfio_device_has_container(device)) {
+		vfio_device_container_unpin_pages(device, iova,
 						 npage);
 		return;
 	}
@@ -1276,9 +1268,9 @@ int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova, void *data,
 	if (!data || len <= 0 || !vfio_assert_device_open(device))
 		return -EINVAL;
 
-	if (vfio_group_has_container(device->group))
-		return vfio_group_container_dma_rw(device->group, iova,
-						   data, len, write);
+	if (vfio_device_has_container(device))
+		return vfio_device_container_dma_rw(device, iova, data, len,
+						    write);
 
 	if (device->iommufd_access) {
 		unsigned int flags = 0;


* Re: [RFC 09/10] vfio: Refactor dma APIs for emulated devices
  2022-11-23 16:51   ` Jason Gunthorpe
@ 2022-11-24  3:05     ` Yi Liu
  0 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-24  3:05 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On 2022/11/24 00:51, Jason Gunthorpe wrote:
> On Wed, Nov 23, 2022 at 07:01:12AM -0800, Yi Liu wrote:
>> To use group helpers instead of opening group related code in the
>> API. This prepares moving group specific code out of vfio_main.c.
>>
>> Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
>> ---
>>   drivers/vfio/container.c | 20 +++++++++++++-------
>>   drivers/vfio/vfio.h      | 32 ++++++++++++++++----------------
>>   drivers/vfio/vfio_main.c | 26 +++++++++++++++-----------
>>   3 files changed, 44 insertions(+), 34 deletions(-)
>>
>> diff --git a/drivers/vfio/container.c b/drivers/vfio/container.c
>> index 6b362d97d682..e0d11ab7229a 100644
>> --- a/drivers/vfio/container.c
>> +++ b/drivers/vfio/container.c
>> @@ -540,11 +540,13 @@ void vfio_group_unuse_container(struct vfio_group *group)
>>   	fput(group->opened_file);
>>   }
>>   
>> -int vfio_container_pin_pages(struct vfio_container *container,
>> -			     struct iommu_group *iommu_group, dma_addr_t iova,
>> -			     int npage, int prot, struct page **pages)
>> +int vfio_group_container_pin_pages(struct vfio_group *group,
>> +				   dma_addr_t iova, int npage,
>> +				   int prot, struct page **pages)
>>   {
>> +	struct vfio_container *container = group->container;
>>   	struct vfio_iommu_driver *driver = container->iommu_driver;
>> +	struct iommu_group *iommu_group = group->iommu_group;
>>   
>>   	if (npage > VFIO_PIN_PAGES_MAX_ENTRIES)
>>   		return -E2BIG;
>> @@ -555,9 +557,11 @@ int vfio_container_pin_pages(struct vfio_container *container,
>>   				      npage, prot, pages);
>>   }
>>   
>> -void vfio_container_unpin_pages(struct vfio_container *container,
>> -				dma_addr_t iova, int npage)
>> +void vfio_group_container_unpin_pages(struct vfio_group *group,
>> +				      dma_addr_t iova, int npage)
>>   {
>> +	struct vfio_container *container = group->container;
>> +
>>   	if (WARN_ON(npage <= 0 || npage > VFIO_PIN_PAGES_MAX_ENTRIES))
>>   		return;
>>   
>> @@ -565,9 +569,11 @@ void vfio_container_unpin_pages(struct vfio_container *container,
>>   						  npage);
>>   }
>>   
>> -int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
>> -			  void *data, size_t len, bool write)
>> +int vfio_group_container_dma_rw(struct vfio_group *group,
>> +				dma_addr_t iova, void *data,
>> +				size_t len, bool write)
>>   {
>> +	struct vfio_container *container = group->container;
>>   	struct vfio_iommu_driver *driver = container->iommu_driver;
>>   
>>   	if (unlikely(!driver || !driver->ops->dma_rw))
>> diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
>> index 3378714a7462..d6b6bc20406b 100644
>> --- a/drivers/vfio/vfio.h
>> +++ b/drivers/vfio/vfio.h
>> @@ -122,13 +122,14 @@ int vfio_container_attach_group(struct vfio_container *container,
>>   void vfio_group_detach_container(struct vfio_group *group);
>>   void vfio_device_container_register(struct vfio_device *device);
>>   void vfio_device_container_unregister(struct vfio_device *device);
>> -int vfio_container_pin_pages(struct vfio_container *container,
>> -			     struct iommu_group *iommu_group, dma_addr_t iova,
>> -			     int npage, int prot, struct page **pages);
>> -void vfio_container_unpin_pages(struct vfio_container *container,
>> -				dma_addr_t iova, int npage);
>> -int vfio_container_dma_rw(struct vfio_container *container, dma_addr_t iova,
>> -			  void *data, size_t len, bool write);
>> +int vfio_group_container_pin_pages(struct vfio_group *group,
>> +				   dma_addr_t iova, int npage,
>> +				   int prot, struct page **pages);
>> +void vfio_group_container_unpin_pages(struct vfio_group *group,
>> +				      dma_addr_t iova, int npage);
>> +int vfio_group_container_dma_rw(struct vfio_group *group,
>> +				dma_addr_t iova, void *data,
>> +				size_t len, bool write);
>>   
>>   int __init vfio_container_init(void);
>>   void vfio_container_cleanup(void);
>> @@ -166,22 +167,21 @@ static inline void vfio_device_container_unregister(struct vfio_device *device)
>>   {
>>   }
>>   
>> -static inline int vfio_container_pin_pages(struct vfio_container *container,
>> -					   struct iommu_group *iommu_group,
>> -					   dma_addr_t iova, int npage, int prot,
>> -					   struct page **pages)
>> +static inline int vfio_group_container_pin_pages(struct vfio_group *group,
>> +						 dma_addr_t iova, int npage,
>> +						 int prot, struct page **pages)
>>   {
>>   	return -EOPNOTSUPP;
>>   }
>>   
>> -static inline void vfio_container_unpin_pages(struct vfio_container *container,
>> -					      dma_addr_t iova, int npage)
>> +static inline void vfio_group_container_unpin_pages(struct vfio_group *group,
>> +						    dma_addr_t iova, int npage)
>>   {
>>   }
>>   
>> -static inline int vfio_container_dma_rw(struct vfio_container *container,
>> -					dma_addr_t iova, void *data, size_t len,
>> -					bool write)
>> +static inline int vfio_group_container_dma_rw(struct vfio_group *group,
>> +					      dma_addr_t iova, void *data,
>> +					      size_t len, bool write)
>>   {
>>   	return -EOPNOTSUPP;
>>   }
>> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
>> index cde258f4ea17..b6d3cb35a523 100644
>> --- a/drivers/vfio/vfio_main.c
>> +++ b/drivers/vfio/vfio_main.c
>> @@ -1925,6 +1925,11 @@ int vfio_set_irqs_validate_and_prepare(struct vfio_irq_set *hdr, int num_irqs,
>>   }
>>   EXPORT_SYMBOL(vfio_set_irqs_validate_and_prepare);
>>   
>> +static bool vfio_group_has_container(struct vfio_group *group)
>> +{
>> +	return group->container;
>> +}
> 
> This should probably be
>   
>    vfio_device_has_container(struct vfio_device  *device)
> 
> And it just returns false if the group code is compiled out

Sure.

-- 
Regards,
Yi Liu


* Re: [RFC 10/10] vfio: Move vfio group specific code into group.c
  2022-11-23 18:37   ` Jason Gunthorpe
@ 2022-11-24  3:06     ` Yi Liu
  0 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-24  3:06 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On 2022/11/24 02:37, Jason Gunthorpe wrote:
> On Wed, Nov 23, 2022 at 07:01:13AM -0800, Yi Liu wrote:
> 
>> +void vfio_device_group_abort_open(struct vfio_device *device)
>> +{
>> +	mutex_lock(&device->group->group_lock);
>> +	if (device->group->container)
>> +		vfio_device_container_unregister(device);
>> +	mutex_unlock(&device->group->group_lock);
>> +}
> 
> I'm looking at your git branch and I don't see this called?
> 
> drivers/vfio/group.c:void vfio_device_group_abort_open(struct vfio_device *device)
> drivers/vfio/vfio.h:void vfio_device_group_abort_open(struct vfio_device *device);
> 

Sorry, a rebase mistake; they should be removed. Thanks for spotting it.

-- 
Regards,
Yi Liu


* Re: [RFC 00/10] Move group specific code into group.c
  2022-11-23 18:41 ` [RFC 00/10] Move " Jason Gunthorpe
@ 2022-11-24  3:15   ` Yi Liu
  0 siblings, 0 replies; 17+ messages in thread
From: Yi Liu @ 2022-11-24  3:15 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: alex.williamson, kevin.tian, eric.auger, cohuck, nicolinc,
	yi.y.sun, chao.p.peng, mjrosato, kvm

On 2022/11/24 02:41, Jason Gunthorpe wrote:
> On Wed, Nov 23, 2022 at 07:01:03AM -0800, Yi Liu wrote:
>> With the introduction of iommufd[1], VFIO is towarding to provide device
>> centric uAPI after adapting to iommufd. With this trend, existing VFIO
>> group infrastructure is optional once VFIO converted to device centric.
>>
>> This series moves the group specific code out of vfio_main.c, prepares
>> for compiling group infrastructure out after adding vfio device cdev[2]
>>
>> Complete code in below branch:
>>
>> https://github.com/yiliu1765/iommufd/commits/vfio_group_split_rfcv1
>>
>> This is based on Jason's "Connect VFIO to IOMMUFD"[3] and my "Make mdev driver
>> dma_unmap callback tolerant to unmaps come before device open"[4]
>>
>> [1] https://lore.kernel.org/all/0-v5-4001c2997bd0+30c-iommufd_jgg@nvidia.com/
>> [2] https://github.com/yiliu1765/iommufd/tree/wip/vfio_device_cdev
>> [3] https://lore.kernel.org/kvm/063990c3-c244-1f7f-4e01-348023832066@intel.com/T/#t
>> [4] https://lore.kernel.org/kvm/20221123134832.429589-1-yi.l.liu@intel.com/T/#t
> 
> I looked at this for a while, I think you should squish the below into
> the series too.
> 
> A good goal is to make it so we can compile out vfio_device::group
> entirely when group.c is disabled. This makes the compile time
> checking stronger (adjust the cdev patch to do this). It means
> removing all device->group references from vfio_main.c, which the
> below does:

Sure, I'll refine the series with the hints below. BTW, if there is no
device->group reference left in vfio_main.c, then we may also make struct
vfio_device::group optionally defined in the header file. This may be done
in the later cdev series.

> diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
> index d8ef098c1f74a6..3a69839c65ff75 100644
> --- a/drivers/vfio/group.c
> +++ b/drivers/vfio/group.c
> @@ -476,8 +476,8 @@ void vfio_device_remove_group(struct vfio_device *device)
>   	put_device(&group->dev);
>   }
>   
> -struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
> -					    enum vfio_group_type type)
> +static struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
> +						   enum vfio_group_type type)
>   {
>   	struct iommu_group *iommu_group;
>   	struct vfio_group *group;
> @@ -526,7 +526,7 @@ static bool vfio_group_has_device(struct vfio_group *group, struct device *dev)
>   	return false;
>   }
>   
> -struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
> +static struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
>   {
>   	struct iommu_group *iommu_group;
>   	struct vfio_group *group;
> @@ -577,6 +577,22 @@ struct vfio_group *vfio_group_find_or_alloc(struct device *dev)
>   	return group;
>   }
>   
> +int vfio_device_set_group(struct vfio_device *device, enum vfio_group_type type)
> +{
> +	struct vfio_group *group;
> +
> +	if (type == VFIO_IOMMU)
> +		group = vfio_group_find_or_alloc(device->dev);
> +	else
> +		group = vfio_noiommu_group_alloc(device->dev, type);
> +	if (IS_ERR(group))
> +		return PTR_ERR(group);
> +
> +	/* Our reference on group is moved to the device */
> +	device->group = group;
> +	return 0;
> +}
> +
>   void vfio_device_group_register(struct vfio_device *device)
>   {
>   	mutex_lock(&device->group->device_lock);
> @@ -632,8 +648,10 @@ void vfio_device_group_unuse_iommu(struct vfio_device *device)
>   	mutex_unlock(&device->group->group_lock);
>   }
>   
> -struct kvm *vfio_group_get_kvm(struct vfio_group *group)
> +struct kvm *vfio_device_get_group_kvm(struct vfio_device *device)
>   {
> +	struct vfio_group *group = device->group;
> +
>   	mutex_lock(&group->group_lock);
>   	if (!group->kvm) {
>   		mutex_unlock(&group->group_lock);
> @@ -643,24 +661,8 @@ struct kvm *vfio_group_get_kvm(struct vfio_group *group)
>   	return group->kvm;
>   }
>   
> -void vfio_group_put_kvm(struct vfio_group *group)
> -{
> -	mutex_unlock(&group->group_lock);
> -}
> -
> -void vfio_device_group_finalize_open(struct vfio_device *device)
> +void vfio_device_put_group_kvm(struct vfio_device *device)
>   {
> -	mutex_lock(&device->group->group_lock);
> -	if (device->group->container)
> -		vfio_device_container_register(device);
> -	mutex_unlock(&device->group->group_lock);
> -}
> -
> -void vfio_device_group_abort_open(struct vfio_device *device)
> -{
> -	mutex_lock(&device->group->group_lock);
> -	if (device->group->container)
> -		vfio_device_container_unregister(device);
>   	mutex_unlock(&device->group->group_lock);
>   }
>   
> @@ -779,9 +781,9 @@ bool vfio_file_has_dev(struct file *file, struct vfio_device *device)
>   }
>   EXPORT_SYMBOL_GPL(vfio_file_has_dev);
>   
> -bool vfio_group_has_container(struct vfio_group *group)
> +bool vfio_device_has_container(struct vfio_device *device)
>   {
> -	return group->container;
> +	return device->group->container;
>   }
>   
>   static char *vfio_devnode(struct device *dev, umode_t *mode)
> diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
> index 670c9c5a55f1fc..e69bfcefee400e 100644
> --- a/drivers/vfio/vfio.h
> +++ b/drivers/vfio/vfio.h
> @@ -70,19 +70,16 @@ struct vfio_group {
>   	struct iommufd_ctx		*iommufd;
>   };
>   
> +int vfio_device_set_group(struct vfio_device *device,
> +			  enum vfio_group_type type);
>   void vfio_device_remove_group(struct vfio_device *device);
> -struct vfio_group *vfio_noiommu_group_alloc(struct device *dev,
> -					    enum vfio_group_type type);
> -struct vfio_group *vfio_group_find_or_alloc(struct device *dev);
>   void vfio_device_group_register(struct vfio_device *device);
>   void vfio_device_group_unregister(struct vfio_device *device);
>   int vfio_device_group_use_iommu(struct vfio_device *device);
>   void vfio_device_group_unuse_iommu(struct vfio_device *device);
> -struct kvm *vfio_group_get_kvm(struct vfio_group *group);
> -void vfio_group_put_kvm(struct vfio_group *group);
> -void vfio_device_group_finalize_open(struct vfio_device *device);
> -void vfio_device_group_abort_open(struct vfio_device *device);
> -bool vfio_group_has_container(struct vfio_group *group);
> +struct kvm *vfio_device_get_group_kvm(struct vfio_device *device);
> +void vfio_device_put_group_kvm(struct vfio_device *device);
> +bool vfio_device_has_container(struct vfio_device *device);
>   int __init vfio_group_init(void);
>   void vfio_group_cleanup(void);
>   
> @@ -142,12 +139,12 @@ int vfio_container_attach_group(struct vfio_container *container,
>   void vfio_group_detach_container(struct vfio_group *group);
>   void vfio_device_container_register(struct vfio_device *device);
>   void vfio_device_container_unregister(struct vfio_device *device);
> -int vfio_group_container_pin_pages(struct vfio_group *group,
> +int vfio_device_container_pin_pages(struct vfio_device *device,
>   				   dma_addr_t iova, int npage,
>   				   int prot, struct page **pages);
> -void vfio_group_container_unpin_pages(struct vfio_group *group,
> +void vfio_device_container_unpin_pages(struct vfio_device *device,
>   				      dma_addr_t iova, int npage);
> -int vfio_group_container_dma_rw(struct vfio_group *group,
> +int vfio_device_container_dma_rw(struct vfio_device *device,
>   				dma_addr_t iova, void *data,
>   				size_t len, bool write);
>   
> @@ -187,21 +184,21 @@ static inline void vfio_device_container_unregister(struct vfio_device *device)
>   {
>   }
>   
> -static inline int vfio_group_container_pin_pages(struct vfio_group *group,
> -						 dma_addr_t iova, int npage,
> -						 int prot, struct page **pages)
> +static inline int vfio_device_container_pin_pages(struct vfio_device *device,
> +						  dma_addr_t iova, int npage,
> +						  int prot, struct page **pages)
>   {
>   	return -EOPNOTSUPP;
>   }
>   
> -static inline void vfio_group_container_unpin_pages(struct vfio_group *group,
> -						    dma_addr_t iova, int npage)
> +static inline void vfio_device_container_unpin_pages(struct vfio_device *device,
> +						     dma_addr_t iova, int npage)
>   {
>   }
>   
> -static inline int vfio_group_container_dma_rw(struct vfio_group *group,
> -					      dma_addr_t iova, void *data,
> -					      size_t len, bool write)
> +static inline int vfio_device_container_dma_rw(struct vfio_device *device,
> +					       dma_addr_t iova, void *data,
> +					       size_t len, bool write)
>   {
>   	return -EOPNOTSUPP;
>   }
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index a7b966b4f3fc86..3108e92a5cb20b 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -260,17 +260,10 @@ void vfio_free_device(struct vfio_device *device)
>   EXPORT_SYMBOL_GPL(vfio_free_device);
>   
>   static int __vfio_register_dev(struct vfio_device *device,
> -		struct vfio_group *group)
> +			       enum vfio_group_type type)
>   {
>   	int ret;
>   
> -	/*
> -	 * In all cases group is the output of one of the group allocation
> -	 * functions and we have group->drivers incremented for us.
> -	 */
> -	if (IS_ERR(group))
> -		return PTR_ERR(group);
> -
>   	if (WARN_ON(device->ops->bind_iommufd &&
>   		    (!device->ops->unbind_iommufd ||
>   		     !device->ops->attach_ioas)))
> @@ -283,16 +276,19 @@ static int __vfio_register_dev(struct vfio_device *device,
>   	if (!device->dev_set)
>   		vfio_assign_device_set(device, device);
>   
> -	/* Our reference on group is moved to the device */
> -	device->group = group;
> -
>   	ret = dev_set_name(&device->device, "vfio%d", device->index);
>   	if (ret)
> -		goto err_out;
> +		return ret;
>   
> -	ret = device_add(&device->device);
> +	ret = vfio_device_set_group(device, type);
>   	if (ret)
> -		goto err_out;
> +		return ret;
> +
> +	ret = device_add(&device->device);
> +	if (ret) {
> +		vfio_device_remove_group(device);
> +		return ret;
> +	}
>   
>   	/* Refcounting can't start until the driver calls register */
>   	refcount_set(&device->refcount, 1);
> @@ -300,15 +296,12 @@ static int __vfio_register_dev(struct vfio_device *device,
>   	vfio_device_group_register(device);
>   
>   	return 0;
> -err_out:
> -	vfio_device_remove_group(device);
> -	return ret;
>   }
>   
>   int vfio_register_group_dev(struct vfio_device *device)
>   {
> -	return __vfio_register_dev(device,
> -		vfio_group_find_or_alloc(device->dev));
> +	return __vfio_register_dev(device, VFIO_IOMMU);
> +
>   }
>   EXPORT_SYMBOL_GPL(vfio_register_group_dev);
>   
> @@ -318,8 +311,7 @@ EXPORT_SYMBOL_GPL(vfio_register_group_dev);
>    */
>   int vfio_register_emulated_iommu_dev(struct vfio_device *device)
>   {
> -	return __vfio_register_dev(device,
> -		vfio_noiommu_group_alloc(device->dev, VFIO_EMULATED_IOMMU));
> +	return __vfio_register_dev(device, VFIO_EMULATED_IOMMU);
>   }
>   EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev);
>   
> @@ -386,7 +378,7 @@ static int vfio_device_first_open(struct vfio_device *device)
>   	if (ret)
>   		goto err_module_put;
>   
> -	kvm = vfio_group_get_kvm(device->group);
> +	kvm = vfio_device_get_group_kvm(device);
>   	if (!kvm) {
>   		ret = -EINVAL;
>   		goto err_unuse_iommu;
> @@ -398,12 +390,12 @@ static int vfio_device_first_open(struct vfio_device *device)
>   		if (ret)
>   			goto err_container;
>   	}
> -	vfio_group_put_kvm(device->group);
> +	vfio_device_put_group_kvm(device);
>   	return 0;
>   
>   err_container:
>   	device->kvm = NULL;
> -	vfio_group_put_kvm(device->group);
> +	vfio_device_put_group_kvm(device);
>   err_unuse_iommu:
>   	vfio_device_group_unuse_iommu(device);
>   err_module_put:
> @@ -1199,8 +1191,8 @@ int vfio_pin_pages(struct vfio_device *device, dma_addr_t iova,
>   	/* group->container cannot change while a vfio device is open */
>   	if (!pages || !npage || WARN_ON(!vfio_assert_device_open(device)))
>   		return -EINVAL;
> -	if (vfio_group_has_container(device->group))
> -		return vfio_group_container_pin_pages(device->group, iova,
> +	if (vfio_device_has_container(device))
> +		return vfio_device_container_pin_pages(device, iova,
>   						      npage, prot, pages);
>   	if (device->iommufd_access) {
>   		int ret;
> @@ -1237,8 +1229,8 @@ void vfio_unpin_pages(struct vfio_device *device, dma_addr_t iova, int npage)
>   	if (WARN_ON(!vfio_assert_device_open(device)))
>   		return;
>   
> -	if (vfio_group_has_container(device->group)) {
> -		vfio_group_container_unpin_pages(device->group, iova,
> +	if (vfio_device_has_container(device)) {
> +		vfio_device_container_unpin_pages(device, iova,
>   						 npage);
>   		return;
>   	}
> @@ -1276,9 +1268,9 @@ int vfio_dma_rw(struct vfio_device *device, dma_addr_t iova, void *data,
>   	if (!data || len <= 0 || !vfio_assert_device_open(device))
>   		return -EINVAL;
>   
> -	if (vfio_group_has_container(device->group))
> -		return vfio_group_container_dma_rw(device->group, iova,
> -						   data, len, write);
> +	if (vfio_device_has_container(device))
> +		return vfio_device_container_dma_rw(device, iova, data, len,
> +						    write);
>   
>   	if (device->iommufd_access) {
>   		unsigned int flags = 0;

-- 
Regards,
Yi Liu


end of thread, other threads:[~2022-11-24  3:15 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-23 15:01 [RFC 00/10] Move group specific code into group.c Yi Liu
2022-11-23 15:01 ` [RFC 01/10] vfio: Simplify vfio_create_group() Yi Liu
2022-11-23 15:01 ` [RFC 02/10] vfio: Move the sanity check of the group to vfio_create_group() Yi Liu
2022-11-23 15:01 ` [RFC 03/10] vfio: Wrap group codes to be helpers for __vfio_register_dev() and unregister Yi Liu
2022-11-23 15:01 ` [RFC 04/10] vfio: Make vfio_device_open() group agnostic Yi Liu
2022-11-23 15:01 ` [RFC 05/10] vfio: Move device open/close code to be helpfers Yi Liu
2022-11-23 15:01 ` [RFC 06/10] vfio: Swap order of vfio_device_container_register() and open_device() Yi Liu
2022-11-23 15:01 ` [RFC 07/10] vfio: Refactor vfio_device_first_open() and _last_close() Yi Liu
2022-11-23 15:01 ` [RFC 08/10] vfio: Wrap vfio group module init/clean code into helpers Yi Liu
2022-11-23 15:01 ` [RFC 09/10] vfio: Refactor dma APIs for emulated devices Yi Liu
2022-11-23 16:51   ` Jason Gunthorpe
2022-11-24  3:05     ` Yi Liu
2022-11-23 15:01 ` [RFC 10/10] vfio: Move vfio group specific code into group.c Yi Liu
2022-11-23 18:37   ` Jason Gunthorpe
2022-11-24  3:06     ` Yi Liu
2022-11-23 18:41 ` [RFC 00/10] Move " Jason Gunthorpe
2022-11-24  3:15   ` Yi Liu
