llvm.lists.linux.dev archive mirror
* [PATCH v4 00/17] Consolidate the error handling around device attachment
@ 2023-04-12 13:51 Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 01/17] iommu: Replace iommu_group_device_count() with list_count_nodes() Jason Gunthorpe
                   ` (17 more replies)
  0 siblings, 18 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Device attachment has a bunch of different flows open coded in different
ways throughout the code.

One of the things that became apparent recently is that error handling
is important: we need to treat errors during attach consistently and
have some strategy to unwind back to a safe state.

Implement a single algorithm for this in one function. It calls each
device's attach; if that fails it tries to go back to the prior domain,
or, as a contingency against a UAF crash, to a blocking domain.
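
To make that concrete, here is a minimal sketch of the shape of the
algorithm. It is a stand-alone toy model, not the kernel code: the types,
set_group_domain() and attach_dev() below are invented for illustration,
and the real series additionally handles deferred attach, NULL domains and
the set_platform_dma_ops fallback.

#include <stdbool.h>
#include <stddef.h>

/* Toy model of the unwind strategy described above; not the kernel code. */
struct domain { const char *name; };
struct dev { struct domain *attached; bool broken; };

/* Stand-in for the driver's attach_dev callback; may fail. */
static int attach_dev(struct dev *dev, struct domain *dom)
{
	if (dev->broken)
		return -1;
	dev->attached = dom;
	return 0;
}

/*
 * Attach every device to 'new'. On failure either revert the devices already
 * moved back to 'cur' (failable callers), or, when the caller cannot accept
 * failure, park the failing device on 'blocking' to avoid a UAF.
 */
static int set_group_domain(struct dev **devs, size_t ndevs,
			    struct domain *new, struct domain *cur,
			    struct domain *blocking, bool must_succeed)
{
	size_t i;
	int ret, result = 0;

	for (i = 0; i < ndevs; i++) {
		ret = attach_dev(devs[i], new);
		if (!ret)
			continue;
		if (must_succeed) {
			if (blocking && blocking != new)
				attach_dev(devs[i], blocking);
			result = ret;	/* keep going, report at the end */
			continue;
		}
		/* Revert everything touched so far back to the old domain. */
		do {
			attach_dev(devs[i], cur);
		} while (i-- > 0);
		return ret;
	}
	return result;
}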

As part of this we consolidate how the default domain is created and
attached as well into one place with a consistent flow.

The new worker functions are called __iommu_device_set_domain() and
__iommu_group_set_domain_internal(); each has sensible error handling
internally. At the end of the series __iommu_group_set_domain_internal()
is the only function that stores to group->domain, and it must be called
to change this value.

Flags convey the intent of the caller: whether the caller cannot accept a
failure, or whether this is a first attach that wants the deferred logic.

Several of the confusing workflows where we store things in group->domain
or group->default_domain before they are fully set up are removed.

This has a followup series that does a similar de-duplication to the probe
path:

https://github.com/jgunthorpe/linux/commits/iommu_err_unwind

v4:
 - Update comments and commit messages
 - New patch "Remove iommu_group_do_dma_first_attach() from iommu_group_add_device()"
 - Redo "Replace __iommu_group_dma_first_attach() with set_domain"
   to avoid the flag
 - Drop "Make iommu_group_do_dma_first_attach() work with owned groups"
   since the fix happens implicitly now
 - Use __iommu_group_set_domain() instead of
   __iommu_group_set_domain_internal() in most places
 - Make sure iommu_create_device_direct_mappings() is always called to
   update the default domain, even if the currently attached domain is not
   the default domain
 - Make the error case for inconsistent iommu_get_def_domain_type() results
   always use iommu_def_domain_type
 - Handle errors for the first default domain attach more cleanly
v3: https://lore.kernel.org/r/0-v3-e89a9bb522f5+8c87-iommu_err_unwind_jgg@nvidia.com
 - New patch to do iommu_group_create_direct_mappings() before
   attach on the hotplug path based on Lu and Robin's remarks
 - Fix to return 0 if the group has conflicting default domain types like
   the original code
 - Put iommu_group_create_direct_mappings() before attach in setup_domains
 - Split up the alloc changes from the setup_domain patch to their own
   patch, implement Robin's point about how the iommu_def_domain_type should
   work
 - New patch to optionally do iommu_group_create_direct_mappings() after
   attach
 - Reword the setup_domain patch's commit message
v2: https://lore.kernel.org/r/0-v2-cd32667d2ba6+70bd1-iommu_err_unwind_jgg@nvidia.com
 - New patch to remove iommu_group_device_count()
 - New patch to add a for_each helper: for_each_group_device()
 - Rebase on Joerg's tree
 - IOMMU_SET_DOMAIN_MUST_SUCCEED instead of true
 - Split patch to fix owned groups during first attach
 - Change iommu_create_device_direct_mappings to accept a domain not a
   group
 - Significantly revise the "iommu: Consolidate the default_domain setup to
   one function" patch to de-duplicate the domain type calculation logic
   too
 - New patch to clean the flow inside iommu_group_store_type()
v1: https://lore.kernel.org/r/0-v1-20507a7e6b7e+2d6-iommu_err_unwind_jgg@nvidia.com

Jason Gunthorpe (17):
  iommu: Replace iommu_group_device_count() with list_count_nodes()
  iommu: Add for_each_group_device()
  iommu: Make __iommu_group_set_domain() handle error unwind
  iommu: Use __iommu_group_set_domain() for __iommu_attach_group()
  iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain()
  iommu: Replace __iommu_group_dma_first_attach() with set_domain
  iommu: Remove iommu_group_do_dma_first_attach() from
    iommu_group_add_device()
  iommu: Replace iommu_group_do_dma_first_attach with
    __iommu_device_set_domain
  iommu: Fix iommu_probe_device() to attach the right domain
  iommu: Do iommu_group_create_direct_mappings() before attach
  iommu: Remove the assignment of group->domain during default domain
    alloc
  iommu: Consolidate the code to calculate the target default domain
    type
  iommu: Revise iommu_group_alloc_default_domain()
  iommu: Consolidate the default_domain setup to one function
  iommu: Allow IOMMU_RESV_DIRECT to work on ARM
  iommu: Remove __iommu_group_for_each_dev()
  iommu: Tidy the control flow in iommu_group_store_type()

 .clang-format         |   1 +
 drivers/iommu/iommu.c | 669 +++++++++++++++++++++---------------------
 2 files changed, 331 insertions(+), 339 deletions(-)


base-commit: 3578e36f238ef81a1967e849e0a3106c9dd37e68
-- 
2.40.0


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v4 01/17] iommu: Replace iommu_group_device_count() with list_count_nodes()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 02/17] iommu: Add for_each_group_device() Jason Gunthorpe
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

No reason to wrap a standard function; just call the library function directly.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7abee83610b6c9..8cdf47279704c2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1149,17 +1149,6 @@ void iommu_group_remove_device(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(iommu_group_remove_device);
 
-static int iommu_group_device_count(struct iommu_group *group)
-{
-	struct group_device *entry;
-	int ret = 0;
-
-	list_for_each_entry(entry, &group->devices, list)
-		ret++;
-
-	return ret;
-}
-
 static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,
 				      int (*fn)(struct device *, void *))
 {
@@ -2101,7 +2090,7 @@ int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
 	 */
 	mutex_lock(&group->mutex);
 	ret = -EINVAL;
-	if (iommu_group_device_count(group) != 1)
+	if (list_count_nodes(&group->devices) != 1)
 		goto out_unlock;
 
 	ret = __iommu_attach_group(domain, group);
@@ -2132,7 +2121,7 @@ void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
 
 	mutex_lock(&group->mutex);
 	if (WARN_ON(domain != group->domain) ||
-	    WARN_ON(iommu_group_device_count(group) != 1))
+	    WARN_ON(list_count_nodes(&group->devices) != 1))
 		goto out_unlock;
 	__iommu_group_set_core_domain(group);
 
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 02/17] iommu: Add for_each_group_device()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 01/17] iommu: Replace iommu_group_device_count() with list_count_nodes() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 03/17] iommu: Make __iommu_group_set_domain() handle error unwind Jason Gunthorpe
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Convenience macro to iterate over every struct group_device in the group.

Replace all open-coded list_for_each_entry() uses with this macro.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 .clang-format         |  1 +
 drivers/iommu/iommu.c | 16 ++++++++++------
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/.clang-format b/.clang-format
index d988e9fa9b2653..bece1995f2c159 100644
--- a/.clang-format
+++ b/.clang-format
@@ -254,6 +254,7 @@ ForEachMacros:
   - 'for_each_free_mem_range'
   - 'for_each_free_mem_range_reverse'
   - 'for_each_func_rsrc'
+  - 'for_each_group_device'
   - 'for_each_group_evsel'
   - 'for_each_group_member'
   - 'for_each_hstate'
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8cdf47279704c2..501257bd02fc3c 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -67,6 +67,10 @@ struct group_device {
 	char *name;
 };
 
+/* Iterate over each struct group_device in a struct iommu_group */
+#define for_each_group_device(group, pos) \
+	list_for_each_entry(pos, &(group)->devices, list)
+
 struct iommu_group_attribute {
 	struct attribute attr;
 	ssize_t (*show)(struct iommu_group *group, char *buf);
@@ -463,7 +467,7 @@ __iommu_group_remove_device(struct iommu_group *group, struct device *dev)
 	struct group_device *device;
 
 	lockdep_assert_held(&group->mutex);
-	list_for_each_entry(device, &group->devices, list) {
+	for_each_group_device(group, device) {
 		if (device->dev == dev) {
 			list_del(&device->list);
 			return device;
@@ -702,7 +706,7 @@ int iommu_get_group_resv_regions(struct iommu_group *group,
 	int ret = 0;
 
 	mutex_lock(&group->mutex);
-	list_for_each_entry(device, &group->devices, list) {
+	for_each_group_device(group, device) {
 		struct list_head dev_resv_regions;
 
 		/*
@@ -1155,7 +1159,7 @@ static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,
 	struct group_device *device;
 	int ret = 0;
 
-	list_for_each_entry(device, &group->devices, list) {
+	for_each_group_device(group, device) {
 		ret = fn(device->dev, data);
 		if (ret)
 			break;
@@ -1959,7 +1963,7 @@ bool iommu_group_has_isolated_msi(struct iommu_group *group)
 	bool ret = true;
 
 	mutex_lock(&group->mutex);
-	list_for_each_entry(group_dev, &group->devices, list)
+	for_each_group_device(group, group_dev)
 		ret &= msi_device_has_isolated_msi(group_dev->dev);
 	mutex_unlock(&group->mutex);
 	return ret;
@@ -3261,7 +3265,7 @@ static int __iommu_set_group_pasid(struct iommu_domain *domain,
 	struct group_device *device;
 	int ret = 0;
 
-	list_for_each_entry(device, &group->devices, list) {
+	for_each_group_device(group, device) {
 		ret = domain->ops->set_dev_pasid(domain, device->dev, pasid);
 		if (ret)
 			break;
@@ -3276,7 +3280,7 @@ static void __iommu_remove_group_pasid(struct iommu_group *group,
 	struct group_device *device;
 	const struct iommu_ops *ops;
 
-	list_for_each_entry(device, &group->devices, list) {
+	for_each_group_device(group, device) {
 		ops = dev_iommu_ops(device->dev);
 		ops->remove_dev_pasid(device->dev, pasid);
 	}
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 03/17] iommu: Make __iommu_group_set_domain() handle error unwind
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 01/17] iommu: Replace iommu_group_device_count() with list_count_nodes() Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 02/17] iommu: Add for_each_group_device() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 04/17] iommu: Use __iommu_group_set_domain() for __iommu_attach_group() Jason Gunthorpe
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Let's try to have a consistent and clear strategy for error handling
during domain attach failures.

There are two broad categories. The first is callers doing destruction and
trying to set the domain back to a previously good domain. These cases
cannot handle failure during destruction flows and must succeed, or at
least avoid a UAF on the current group->domain, which is likely about to be
freed.

Many of the drivers are well behaved here and will not hit the WARN_ONs
or a UAF, but some do hypercalls/etc that can fail unpredictably and do
not meet these expectations.

The second case is attaching a domain for the first time in a failable
context; on failure the attachment should be restored back to group->domain
using the above unfailable operation.

Have __iommu_group_set_domain_internal() execute a common algorithm that
tries to achieve this and, in the worst case, leaves a device "detached" or
assigned to a global blocking domain. This relies on some existing common
driver behaviors, where attach failure also does a detach, and on true
IOMMU_DOMAIN_BLOCK implementations that are never allowed to fail.

Name the first case with __iommu_group_set_domain_nofail() to make it
clear.

Pull all the error handling and WARN_ON generation into
__iommu_group_set_domain_internal().

Avoid the obfuscating use of __iommu_group_for_each_dev() and be more
careful about what should happen during failures by only touching devices
we've already touched.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 137 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 112 insertions(+), 25 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 501257bd02fc3c..ae396c1d075862 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -99,8 +99,26 @@ static int __iommu_attach_device(struct iommu_domain *domain,
 				 struct device *dev);
 static int __iommu_attach_group(struct iommu_domain *domain,
 				struct iommu_group *group);
+
+enum {
+	IOMMU_SET_DOMAIN_MUST_SUCCEED = 1 << 0,
+};
+
+static int __iommu_group_set_domain_internal(struct iommu_group *group,
+					     struct iommu_domain *new_domain,
+					     unsigned int flags);
 static int __iommu_group_set_domain(struct iommu_group *group,
-				    struct iommu_domain *new_domain);
+				    struct iommu_domain *new_domain)
+{
+	return __iommu_group_set_domain_internal(group, new_domain, 0);
+}
+static void __iommu_group_set_domain_nofail(struct iommu_group *group,
+					    struct iommu_domain *new_domain)
+{
+	WARN_ON(__iommu_group_set_domain_internal(
+		group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
+}
+
 static int iommu_create_device_direct_mappings(struct iommu_group *group,
 					       struct device *dev);
 static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
@@ -2040,15 +2058,13 @@ EXPORT_SYMBOL_GPL(iommu_domain_free);
 static void __iommu_group_set_core_domain(struct iommu_group *group)
 {
 	struct iommu_domain *new_domain;
-	int ret;
 
 	if (group->owner)
 		new_domain = group->blocking_domain;
 	else
 		new_domain = group->default_domain;
 
-	ret = __iommu_group_set_domain(group, new_domain);
-	WARN(ret, "iommu driver failed to attach the default/blocking domain");
+	__iommu_group_set_domain_nofail(group, new_domain);
 }
 
 static int __iommu_attach_device(struct iommu_domain *domain,
@@ -2233,21 +2249,55 @@ int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
 }
 EXPORT_SYMBOL_GPL(iommu_attach_group);
 
-static int iommu_group_do_set_platform_dma(struct device *dev, void *data)
+static int __iommu_device_set_domain(struct iommu_group *group,
+				     struct device *dev,
+				     struct iommu_domain *new_domain,
+				     unsigned int flags)
 {
-	const struct iommu_ops *ops = dev_iommu_ops(dev);
-
-	if (!WARN_ON(!ops->set_platform_dma_ops))
-		ops->set_platform_dma_ops(dev);
+	int ret;
 
+	ret = __iommu_attach_device(new_domain, dev);
+	if (ret) {
+		/*
+		 * If we have a blocking domain then try to attach that in hopes
+		 * of avoiding a UAF. Modern drivers should implement blocking
+		 * domains as global statics that cannot fail.
+		 */
+		if ((flags & IOMMU_SET_DOMAIN_MUST_SUCCEED) &&
+		    group->blocking_domain &&
+		    group->blocking_domain != new_domain)
+			__iommu_attach_device(group->blocking_domain, dev);
+		return ret;
+	}
 	return 0;
 }
 
-static int __iommu_group_set_domain(struct iommu_group *group,
-				    struct iommu_domain *new_domain)
+/*
+ * If 0 is returned the group's domain is new_domain. If an error is returned
+ * then the group's domain will be set back to the existing domain unless
+ * IOMMU_SET_DOMAIN_MUST_SUCCEED, otherwise an error is returned and the group's
+ * domain is left inconsistent. It is a driver bug to fail attach with a
+ * previously good domain. We try to avoid a kernel UAF because of this.
+ *
+ * IOMMU groups are really the natural working unit of the IOMMU, but the IOMMU
+ * API works on domains and devices.  Bridge that gap by iterating over the
+ * devices in a group.  Ideally we'd have a single device which represents the
+ * requestor ID of the group, but we also allow IOMMU drivers to create policy
+ * defined minimum sets, where the physical hardware may be able to distinguish
+ * members, but we wish to group them at a higher level (ex. untrusted
+ * multi-function PCI devices).  Thus we attach each device.
+ */
+static int __iommu_group_set_domain_internal(struct iommu_group *group,
+					     struct iommu_domain *new_domain,
+					     unsigned int flags)
 {
+	struct group_device *last_gdev;
+	struct group_device *gdev;
+	int result;
 	int ret;
 
+	lockdep_assert_held(&group->mutex);
+
 	if (group->domain == new_domain)
 		return 0;
 
@@ -2257,8 +2307,12 @@ static int __iommu_group_set_domain(struct iommu_group *group,
 	 * platform specific behavior.
 	 */
 	if (!new_domain) {
-		__iommu_group_for_each_dev(group, NULL,
-					   iommu_group_do_set_platform_dma);
+		for_each_group_device(group, gdev) {
+			const struct iommu_ops *ops = dev_iommu_ops(gdev->dev);
+
+			if (!WARN_ON(!ops->set_platform_dma_ops))
+				ops->set_platform_dma_ops(gdev->dev);
+		}
 		group->domain = NULL;
 		return 0;
 	}
@@ -2268,16 +2322,52 @@ static int __iommu_group_set_domain(struct iommu_group *group,
 	 * domain. This switch does not have to be atomic and DMA can be
 	 * discarded during the transition. DMA must only be able to access
 	 * either new_domain or group->domain, never something else.
-	 *
-	 * Note that this is called in error unwind paths, attaching to a
-	 * domain that has already been attached cannot fail.
 	 */
-	ret = __iommu_group_for_each_dev(group, new_domain,
-					 iommu_group_do_attach_device);
-	if (ret)
-		return ret;
+	result = 0;
+	for_each_group_device(group, gdev) {
+		ret = __iommu_device_set_domain(group, gdev->dev, new_domain,
+						flags);
+		if (ret) {
+			result = ret;
+			/*
+			 * Keep trying the other devices in the group. If a
+			 * driver fails attach to an otherwise good domain, and
+			 * does not support blocking domains, it should at least
+			 * drop its reference on the current domain so we don't
+			 * UAF.
+			 */
+			if (flags & IOMMU_SET_DOMAIN_MUST_SUCCEED)
+				continue;
+			goto err_revert;
+		}
+	}
 	group->domain = new_domain;
-	return 0;
+	return result;
+
+err_revert:
+	/*
+	 * This is called in error unwind paths. A well behaved driver should
+	 * always allow us to attach to a domain that was already attached.
+	 */
+	last_gdev = gdev;
+	for_each_group_device(group, gdev) {
+		const struct iommu_ops *ops = dev_iommu_ops(gdev->dev);
+
+		/*
+		 * If set_platform_dma_ops is not present a NULL domain can
+		 * happen only for first probe, in which case we leave
+		 * group->domain as NULL and let release clean everything up.
+		 */
+		if (group->domain)
+			WARN_ON(__iommu_device_set_domain(
+				group, gdev->dev, group->domain,
+				IOMMU_SET_DOMAIN_MUST_SUCCEED));
+		else if (ops->set_platform_dma_ops)
+			ops->set_platform_dma_ops(gdev->dev);
+		if (gdev == last_gdev)
+			break;
+	}
+	return ret;
 }
 
 void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
@@ -3194,16 +3284,13 @@ EXPORT_SYMBOL_GPL(iommu_device_claim_dma_owner);
 
 static void __iommu_release_dma_ownership(struct iommu_group *group)
 {
-	int ret;
-
 	if (WARN_ON(!group->owner_cnt || !group->owner ||
 		    !xa_empty(&group->pasid_array)))
 		return;
 
 	group->owner_cnt = 0;
 	group->owner = NULL;
-	ret = __iommu_group_set_domain(group, group->default_domain);
-	WARN(ret, "iommu driver failed to attach the default domain");
+	__iommu_group_set_domain_nofail(group, group->default_domain);
 }
 
 /**
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 04/17] iommu: Use __iommu_group_set_domain() for __iommu_attach_group()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (2 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 03/17] iommu: Make __iommu_group_set_domain() handle error unwind Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 05/17] iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain() Jason Gunthorpe
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

The error recovery here matches the recovery inside
__iommu_group_set_domain(), so just use it directly.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 40 +---------------------------------------
 1 file changed, 1 insertion(+), 39 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index ae396c1d075862..2b1fc060305ce6 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2177,52 +2177,14 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 	return dev->iommu_group->default_domain;
 }
 
-/*
- * IOMMU groups are really the natural working unit of the IOMMU, but
- * the IOMMU API works on domains and devices.  Bridge that gap by
- * iterating over the devices in a group.  Ideally we'd have a single
- * device which represents the requestor ID of the group, but we also
- * allow IOMMU drivers to create policy defined minimum sets, where
- * the physical hardware may be able to distiguish members, but we
- * wish to group them at a higher level (ex. untrusted multi-function
- * PCI devices).  Thus we attach each device.
- */
-static int iommu_group_do_attach_device(struct device *dev, void *data)
-{
-	struct iommu_domain *domain = data;
-
-	return __iommu_attach_device(domain, dev);
-}
-
 static int __iommu_attach_group(struct iommu_domain *domain,
 				struct iommu_group *group)
 {
-	int ret;
-
 	if (group->domain && group->domain != group->default_domain &&
 	    group->domain != group->blocking_domain)
 		return -EBUSY;
 
-	ret = __iommu_group_for_each_dev(group, domain,
-					 iommu_group_do_attach_device);
-	if (ret == 0) {
-		group->domain = domain;
-	} else {
-		/*
-		 * To recover from the case when certain device within the
-		 * group fails to attach to the new domain, we need force
-		 * attaching all devices back to the old domain. The old
-		 * domain is compatible for all devices in the group,
-		 * hence the iommu driver should always return success.
-		 */
-		struct iommu_domain *old_domain = group->domain;
-
-		group->domain = NULL;
-		WARN(__iommu_group_set_domain(group, old_domain),
-		     "iommu driver failed to attach a compatible domain");
-	}
-
-	return ret;
+	return __iommu_group_set_domain(group, domain);
 }
 
 /**
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 05/17] iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (3 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 04/17] iommu: Use __iommu_group_set_domain() for __iommu_attach_group() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain Jason Gunthorpe
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

This is missing re-attach error handling if the attach fails; use the
common code instead.

The ugly "group->domain = prev_domain" will be cleaned in a later patch.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2b1fc060305ce6..817d90081b4db7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2964,11 +2964,12 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 	if (ret)
 		goto restore_old_domain;
 
-	ret = iommu_group_create_direct_mappings(group);
+	group->domain = prev_dom;
+	ret = iommu_create_device_direct_mappings(group, dev);
 	if (ret)
 		goto free_new_domain;
 
-	ret = __iommu_attach_group(group->default_domain, group);
+	ret = __iommu_group_set_domain(group, group->default_domain);
 	if (ret)
 		goto free_new_domain;
 
@@ -2980,7 +2981,6 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 	iommu_domain_free(group->default_domain);
 restore_old_domain:
 	group->default_domain = prev_dom;
-	group->domain = prev_dom;
 
 	return ret;
 }
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (4 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 05/17] iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-13  2:35   ` Baolu Lu
  2023-04-20  7:25   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device() Jason Gunthorpe
                   ` (11 subsequent siblings)
  17 siblings, 2 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Reorganize the attach_deferred logic to set dev->iommu->attach_deferred
immediately during probe and then have __iommu_device_set_domain() check
it and not attach the default_domain.

This is to prepare for removing the group->domain set from
iommu_group_alloc_default_domain() by calling __iommu_group_set_domain()
to set the group->domain.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 32 +++++++++++---------------------
 1 file changed, 11 insertions(+), 21 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 817d90081b4db7..b6e6140119731a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -360,6 +360,8 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
 
 	dev->iommu->iommu_dev = iommu_dev;
 	dev->iommu->max_pasids = dev_iommu_get_max_pasids(dev);
+	if (ops->is_attach_deferred)
+		dev->iommu->attach_deferred = ops->is_attach_deferred(dev);
 
 	group = iommu_group_get_for_dev(dev);
 	if (IS_ERR(group)) {
@@ -394,27 +396,14 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
 	return ret;
 }
 
-static bool iommu_is_attach_deferred(struct device *dev)
-{
-	const struct iommu_ops *ops = dev_iommu_ops(dev);
-
-	if (ops->is_attach_deferred)
-		return ops->is_attach_deferred(dev);
-
-	return false;
-}
-
 static int iommu_group_do_dma_first_attach(struct device *dev, void *data)
 {
 	struct iommu_domain *domain = data;
 
 	lockdep_assert_held(&dev->iommu_group->mutex);
 
-	if (iommu_is_attach_deferred(dev)) {
-		dev->iommu->attach_deferred = 1;
+	if (dev->iommu->attach_deferred)
 		return 0;
-	}
-
 	return __iommu_attach_device(domain, dev);
 }
 
@@ -1855,12 +1844,6 @@ static void probe_alloc_default_domain(struct bus_type *bus,
 
 }
 
-static int __iommu_group_dma_first_attach(struct iommu_group *group)
-{
-	return __iommu_group_for_each_dev(group, group->default_domain,
-					  iommu_group_do_dma_first_attach);
-}
-
 static int iommu_group_do_probe_finalize(struct device *dev, void *data)
 {
 	const struct iommu_ops *ops = dev_iommu_ops(dev);
@@ -1923,7 +1906,8 @@ int bus_iommu_probe(struct bus_type *bus)
 
 		iommu_group_create_direct_mappings(group);
 
-		ret = __iommu_group_dma_first_attach(group);
+		group->domain = NULL;
+		ret = __iommu_group_set_domain(group, group->default_domain);
 
 		mutex_unlock(&group->mutex);
 
@@ -2218,6 +2202,12 @@ static int __iommu_device_set_domain(struct iommu_group *group,
 {
 	int ret;
 
+	if (dev->iommu->attach_deferred) {
+		if (new_domain == group->default_domain)
+			return 0;
+		dev->iommu->attach_deferred = 0;
+	}
+
 	ret = __iommu_attach_device(new_domain, dev);
 	if (ret) {
 		/*
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (5 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  7:26   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 08/17] iommu: Replace iommu_group_do_dma_first_attach with __iommu_device_set_domain Jason Gunthorpe
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

This function is only used to construct the groups; it should not be
calling into the iommu driver.

External callers in VFIO and POWER do not have any iommu drivers on the
devices so group->domain will be NULL.

The only internal caller is from iommu_probe_device() which already calls
iommu_group_do_dma_first_attach(), meaning we are calling it twice in the
only case it matters.

Since iommu_probe_device() is the logical place to sort out the group's
domain, remove the call from iommu_group_add_device().

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b6e6140119731a..24b0290d16ffd4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1104,25 +1104,13 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
 
 	mutex_lock(&group->mutex);
 	list_add_tail(&device->list, &group->devices);
-	if (group->domain)
-		ret = iommu_group_do_dma_first_attach(dev, group->domain);
 	mutex_unlock(&group->mutex);
-	if (ret)
-		goto err_put_group;
-
 	trace_add_device_to_group(group->id, dev);
 
 	dev_info(dev, "Adding to iommu group %d\n", group->id);
 
 	return 0;
 
-err_put_group:
-	mutex_lock(&group->mutex);
-	list_del(&device->list);
-	mutex_unlock(&group->mutex);
-	dev->iommu_group = NULL;
-	kobject_put(group->devices_kobj);
-	sysfs_remove_link(group->devices_kobj, device->name);
 err_free_name:
 	kfree(device->name);
 err_remove_link:
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 08/17] iommu: Replace iommu_group_do_dma_first_attach with __iommu_device_set_domain
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (6 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 09/17] iommu: Fix iommu_probe_device() to attach the right domain Jason Gunthorpe
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Since __iommu_device_set_domain() now knows how to handle deferred attach
we can just call it directly from the only call site.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 24b0290d16ffd4..72c3a9c8e4751d 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -104,6 +104,10 @@ enum {
 	IOMMU_SET_DOMAIN_MUST_SUCCEED = 1 << 0,
 };
 
+static int __iommu_device_set_domain(struct iommu_group *group,
+				     struct device *dev,
+				     struct iommu_domain *new_domain,
+				     unsigned int flags);
 static int __iommu_group_set_domain_internal(struct iommu_group *group,
 					     struct iommu_domain *new_domain,
 					     unsigned int flags);
@@ -396,17 +400,6 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
 	return ret;
 }
 
-static int iommu_group_do_dma_first_attach(struct device *dev, void *data)
-{
-	struct iommu_domain *domain = data;
-
-	lockdep_assert_held(&dev->iommu_group->mutex);
-
-	if (dev->iommu->attach_deferred)
-		return 0;
-	return __iommu_attach_device(domain, dev);
-}
-
 int iommu_probe_device(struct device *dev)
 {
 	const struct iommu_ops *ops;
@@ -437,7 +430,7 @@ int iommu_probe_device(struct device *dev)
 	 * attach the default domain.
 	 */
 	if (group->default_domain && !group->owner) {
-		ret = iommu_group_do_dma_first_attach(dev, group->default_domain);
+		ret = __iommu_device_set_domain(group, dev, group->domain, 0);
 		if (ret) {
 			mutex_unlock(&group->mutex);
 			iommu_group_put(group);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 09/17] iommu: Fix iommu_probe_device() to attach the right domain
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (7 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 08/17] iommu: Replace iommu_group_do_dma_first_attach with __iommu_device_set_domain Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach Jason Gunthorpe
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

The general invariant is that all devices in an iommu_group are attached
to group->domain. We missed some cases here where an owned group would not
get the device attached.

Rework this logic so it follows the default domain flow of
bus_iommu_probe(): call iommu_alloc_default_domain(), then use
__iommu_group_set_domain_internal() to set up all the devices.

Finally always attach the device to the current domain if it is already
set.

This is an unlikely functional issue as iommufd uses iommu_attach_group().
It is possible to hot plug in a new group member, add a vfio driver to it
and then hot add it to an existing iommufd. In this case it is required
that the core code set the iommu_domain properly since iommufd won't call
iommu_attach_group() again.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 44 +++++++++++++++++++++++--------------------
 1 file changed, 24 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 72c3a9c8e4751d..9076fbbb7d50cd 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -416,27 +416,31 @@ int iommu_probe_device(struct device *dev)
 		goto err_release;
 	}
 
-	/*
-	 * Try to allocate a default domain - needs support from the
-	 * IOMMU driver. There are still some drivers which don't
-	 * support default domains, so the return value is not yet
-	 * checked.
-	 */
 	mutex_lock(&group->mutex);
-	iommu_alloc_default_domain(group, dev);
 
-	/*
-	 * If device joined an existing group which has been claimed, don't
-	 * attach the default domain.
-	 */
-	if (group->default_domain && !group->owner) {
+	if (group->domain) {
 		ret = __iommu_device_set_domain(group, dev, group->domain, 0);
-		if (ret) {
-			mutex_unlock(&group->mutex);
-			iommu_group_put(group);
-			goto err_release;
-		}
+	} else if (!group->default_domain) {
+		/*
+		 * Try to allocate a default domain - needs support from the
+		 * IOMMU driver. There are still some drivers which don't
+		 * support default domains, so the return value is not yet
+		 * checked.
+		 */
+		iommu_alloc_default_domain(group, dev);
+		group->domain = NULL;
+		if (group->default_domain)
+			ret = __iommu_group_set_domain(group,
+						       group->default_domain);
+
+		/*
+		 * We assume that the iommu driver starts up the device in
+		 * 'set_platform_dma_ops' mode if it does not support default
+		 * domains.
+		 */
 	}
+	if (ret)
+		goto err_unlock;
 
 	iommu_create_device_direct_mappings(group, dev);
 
@@ -449,6 +453,9 @@ int iommu_probe_device(struct device *dev)
 
 	return 0;
 
+err_unlock:
+	mutex_unlock(&group->mutex);
+	iommu_group_put(group);
 err_release:
 	iommu_release_device(dev);
 
@@ -1689,9 +1696,6 @@ static int iommu_alloc_default_domain(struct iommu_group *group,
 {
 	unsigned int type;
 
-	if (group->default_domain)
-		return 0;
-
 	type = iommu_get_def_domain_type(dev) ? : iommu_def_domain_type;
 
 	return iommu_group_alloc_default_domain(dev->bus, group, type);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (8 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 09/17] iommu: Fix iommu_probe_device() to attach the right domain Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  7:29   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 11/17] iommu: Remove the assignment of group->domain during default domain alloc Jason Gunthorpe
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

The iommu_probe_device() path calls iommu_create_device_direct_mappings()
after attaching the device.

IOMMU_RESV_DIRECT maps need to be continuously in place, so if a hotplugged
device has new ranges they should be mapped into the default domain before
the device is attached.

Move the iommu_create_device_direct_mappings() call up.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 9076fbbb7d50cd..f0dcc7a559b55a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -418,6 +418,8 @@ int iommu_probe_device(struct device *dev)
 
 	mutex_lock(&group->mutex);
 
+	iommu_create_device_direct_mappings(group, dev);
+
 	if (group->domain) {
 		ret = __iommu_device_set_domain(group, dev, group->domain, 0);
 	} else if (!group->default_domain) {
@@ -429,9 +431,11 @@ int iommu_probe_device(struct device *dev)
 		 */
 		iommu_alloc_default_domain(group, dev);
 		group->domain = NULL;
-		if (group->default_domain)
+		if (group->default_domain) {
+			iommu_create_device_direct_mappings(group, dev);
 			ret = __iommu_group_set_domain(group,
 						       group->default_domain);
+		}
 
 		/*
 		 * We assume that the iommu driver starts up the device in
@@ -442,8 +446,6 @@ int iommu_probe_device(struct device *dev)
 	if (ret)
 		goto err_unlock;
 
-	iommu_create_device_direct_mappings(group, dev);
-
 	mutex_unlock(&group->mutex);
 	iommu_group_put(group);
 
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 11/17] iommu: Remove the assignment of group->domain during default domain alloc
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (9 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-12 13:51 ` [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type Jason Gunthorpe
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

group->domain should only be set once all the devices' drivers have
had their ops->attach_dev() called. iommu_group_alloc_default_domain()
doesn't do this, so it shouldn't set the value.

The previous patches organized things so that each caller of
iommu_group_alloc_default_domain() follows up with calling
__iommu_group_set_domain_internal(), which does set group->domain.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index f0dcc7a559b55a..57697cc877de2f 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -430,7 +430,6 @@ int iommu_probe_device(struct device *dev)
 		 * checked.
 		 */
 		iommu_alloc_default_domain(group, dev);
-		group->domain = NULL;
 		if (group->default_domain) {
 			iommu_create_device_direct_mappings(group, dev);
 			ret = __iommu_group_set_domain(group,
@@ -1688,8 +1687,6 @@ static int iommu_group_alloc_default_domain(struct bus_type *bus,
 		return -ENOMEM;
 
 	group->default_domain = dom;
-	if (!group->domain)
-		group->domain = dom;
 	return 0;
 }
 
@@ -1893,7 +1890,6 @@ int bus_iommu_probe(struct bus_type *bus)
 
 		iommu_group_create_direct_mappings(group);
 
-		group->domain = NULL;
 		ret = __iommu_group_set_domain(group, group->default_domain);
 
 		mutex_unlock(&group->mutex);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (10 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 11/17] iommu: Remove the assignment of group->domain during default domain alloc Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  7:30   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain() Jason Gunthorpe
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Put all the code to calculate the default domain type into one
function. Make the function able to handle the iommu_change_dev_def_domain()
flow by taking in the target domain type and erroring out if the target
type isn't reachable.

This makes it really clear that specifying a 0 type during
iommu_change_dev_def_domain() will have the same outcome as the normal
probe path.

Remove the obfuscating use of __iommu_group_for_each_dev() and related
struct __group_domain_type.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 88 +++++++++++++++++--------------------------
 1 file changed, 35 insertions(+), 53 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 57697cc877de2f..df82d81e92f24b 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1782,50 +1782,43 @@ static int iommu_bus_notifier(struct notifier_block *nb,
 	return 0;
 }
 
-struct __group_domain_type {
-	struct device *dev;
-	unsigned int type;
-};
-
-static int probe_get_default_domain_type(struct device *dev, void *data)
+/* A target_type of 0 will select the best domain type and cannot fail */
+static int iommu_get_default_domain_type(struct iommu_group *group,
+					 int target_type)
 {
-	struct __group_domain_type *gtype = data;
-	unsigned int type = iommu_get_def_domain_type(dev);
+	int best_type = target_type;
+	struct group_device *gdev;
+	struct device *last_dev;
 
-	if (type) {
-		if (gtype->type && gtype->type != type) {
-			dev_warn(dev, "Device needs domain type %s, but device %s in the same iommu group requires type %s - using default\n",
-				 iommu_domain_type_str(type),
-				 dev_name(gtype->dev),
-				 iommu_domain_type_str(gtype->type));
-			gtype->type = 0;
-		}
+	lockdep_assert_held(&group->mutex);
 
-		if (!gtype->dev) {
-			gtype->dev  = dev;
-			gtype->type = type;
+	for_each_group_device(group, gdev) {
+		unsigned int type = iommu_get_def_domain_type(gdev->dev);
+
+		if (best_type && type && best_type != type) {
+			if (target_type) {
+				dev_err_ratelimited(
+					gdev->dev,
+					"Device cannot be in %s domain\n",
+					iommu_domain_type_str(target_type));
+				return -1;
+			}
+
+			dev_warn(
+				gdev->dev,
+				"Device needs domain type %s, but device %s in the same iommu group requires type %s - using default\n",
+				iommu_domain_type_str(type), dev_name(last_dev),
+				iommu_domain_type_str(best_type));
+			return iommu_def_domain_type;
 		}
+		if (!best_type)
+			best_type = type;
+		last_dev = gdev->dev;
 	}
 
-	return 0;
-}
-
-static void probe_alloc_default_domain(struct bus_type *bus,
-				       struct iommu_group *group)
-{
-	struct __group_domain_type gtype;
-
-	memset(&gtype, 0, sizeof(gtype));
-
-	/* Ask for default domain requirements of all devices in the group */
-	__iommu_group_for_each_dev(group, &gtype,
-				   probe_get_default_domain_type);
-
-	if (!gtype.type)
-		gtype.type = iommu_def_domain_type;
-
-	iommu_group_alloc_default_domain(bus, group, gtype.type);
-
+	if (!best_type)
+		return iommu_def_domain_type;
+	return best_type;
 }
 
 static int iommu_group_do_probe_finalize(struct device *dev, void *data)
@@ -1881,7 +1874,8 @@ int bus_iommu_probe(struct bus_type *bus)
 		list_del_init(&group->entry);
 
 		/* Try to allocate default domain */
-		probe_alloc_default_domain(bus, group);
+		iommu_group_alloc_default_domain(
+			bus, group, iommu_get_default_domain_type(group, 0));
 
 		if (!group->default_domain) {
 			mutex_unlock(&group->mutex);
@@ -2900,27 +2894,15 @@ EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
 static int iommu_change_dev_def_domain(struct iommu_group *group,
 				       struct device *dev, int type)
 {
-	struct __group_domain_type gtype = {NULL, 0};
 	struct iommu_domain *prev_dom;
 	int ret;
 
 	lockdep_assert_held(&group->mutex);
 
 	prev_dom = group->default_domain;
-	__iommu_group_for_each_dev(group, &gtype,
-				   probe_get_default_domain_type);
-	if (!type) {
-		/*
-		 * If the user hasn't requested any specific type of domain and
-		 * if the device supports both the domains, then default to the
-		 * domain the device was booted with
-		 */
-		type = gtype.type ? : iommu_def_domain_type;
-	} else if (gtype.type && type != gtype.type) {
-		dev_err_ratelimited(dev, "Device cannot be in %s domain\n",
-				    iommu_domain_type_str(type));
+	type = iommu_get_default_domain_type(group, type);
+	if (type < 0)
 		return -EINVAL;
-	}
 
 	/*
 	 * Switch to a new domain only if the requested domain type is different
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (11 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-13  2:47   ` Baolu Lu
  2023-04-20  7:54   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function Jason Gunthorpe
                   ` (4 subsequent siblings)
  17 siblings, 2 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Robin points out that the fallback to guessing what domains the driver
supports should only happen if the driver doesn't return a preference from
its ops->def_domain_type().

Re-organize iommu_group_alloc_default_domain() so it internally uses
iommu_def_domain_type only during the fallback and makes it clearer how
the fallback sequence works.
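
As a loose illustration of that fallback order only (every identifier below
is a made-up stand-in, not the kernel allocation API):

/* Toy model of the fallback described above; not the kernel code. */
enum dom_type { DOM_NONE, DOM_IDENTITY, DOM_DMA, DOM_DMA_FQ };

/* Stand-in allocator: pretend the driver only supports plain DMA domains. */
static int try_alloc(enum dom_type type)
{
	return type == DOM_DMA ? 0 : -1;
}

static int alloc_default_domain(enum dom_type req_type, enum dom_type def_type)
{
	if (req_type != DOM_NONE)	/* explicit preference: no fallback */
		return try_alloc(req_type);
	if (!try_alloc(def_type))	/* otherwise try the global default */
		return 0;
	if (def_type == DOM_DMA)	/* already DMA, nothing left to try */
		return -1;
	return try_alloc(DOM_DMA);	/* IDENTITY/DMA_FQ fall back to DMA */
}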

Make iommu_group_alloc_default_domain() return the domain so the
return-based logic is cleaner and to prepare for the next patch.

Remove the iommu_alloc_default_domain() function as it is now trivially
just calling iommu_group_alloc_default_domain().

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 71 ++++++++++++++++++++++---------------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index df82d81e92f24b..bbbc4c865dbecd 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -91,8 +91,9 @@ static const char * const iommu_group_resv_type_string[] = {
 
 static int iommu_bus_notifier(struct notifier_block *nb,
 			      unsigned long action, void *data);
-static int iommu_alloc_default_domain(struct iommu_group *group,
-				      struct device *dev);
+static struct iommu_domain *
+iommu_group_alloc_default_domain(struct iommu_group *group, int req_type);
+static int iommu_get_def_domain_type(struct device *dev);
 static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
 						 unsigned type);
 static int __iommu_attach_device(struct iommu_domain *domain,
@@ -429,7 +430,8 @@ int iommu_probe_device(struct device *dev)
 		 * support default domains, so the return value is not yet
 		 * checked.
 		 */
-		iommu_alloc_default_domain(group, dev);
+		group->default_domain = iommu_group_alloc_default_domain(
+			group, iommu_get_def_domain_type(dev));
 		if (group->default_domain) {
 			iommu_create_device_direct_mappings(group, dev);
 			ret = __iommu_group_set_domain(group,
@@ -1669,35 +1671,38 @@ static int iommu_get_def_domain_type(struct device *dev)
 	return 0;
 }
 
-static int iommu_group_alloc_default_domain(struct bus_type *bus,
-					    struct iommu_group *group,
-					    unsigned int type)
+/*
+ * req_type of 0 means "auto" which means to select a domain based on
+ * iommu_def_domain_type or what the driver actually supports.
+ */
+static struct iommu_domain *
+iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
 {
+	struct bus_type *bus =
+		list_first_entry(&group->devices, struct group_device, list)
+			->dev->bus;
 	struct iommu_domain *dom;
 
-	dom = __iommu_domain_alloc(bus, type);
-	if (!dom && type != IOMMU_DOMAIN_DMA) {
-		dom = __iommu_domain_alloc(bus, IOMMU_DOMAIN_DMA);
-		if (dom)
-			pr_warn("Failed to allocate default IOMMU domain of type %u for group %s - Falling back to IOMMU_DOMAIN_DMA",
-				type, group->name);
-	}
-
-	if (!dom)
-		return -ENOMEM;
+	lockdep_assert_held(&group->mutex);
 
-	group->default_domain = dom;
-	return 0;
-}
+	if (req_type)
+		return __iommu_domain_alloc(bus, req_type);
 
-static int iommu_alloc_default_domain(struct iommu_group *group,
-				      struct device *dev)
-{
-	unsigned int type;
+	/* The driver gave no guidance on what type to use, try the default */
+	dom = __iommu_domain_alloc(bus, iommu_def_domain_type);
+	if (dom)
+		return dom;
 
-	type = iommu_get_def_domain_type(dev) ? : iommu_def_domain_type;
+	/* Otherwise IDENTITY and DMA_FQ defaults will try DMA */
+	if (iommu_def_domain_type == IOMMU_DOMAIN_DMA)
+		return NULL;
+	dom = __iommu_domain_alloc(bus, IOMMU_DOMAIN_DMA);
+	if (!dom)
+		return NULL;
 
-	return iommu_group_alloc_default_domain(dev->bus, group, type);
+	pr_warn("Failed to allocate default IOMMU domain of type %u for group %s - Falling back to IOMMU_DOMAIN_DMA",
+		iommu_def_domain_type, group->name);
+	return dom;
 }
 
 /**
@@ -1809,15 +1814,12 @@ static int iommu_get_default_domain_type(struct iommu_group *group,
 				"Device needs domain type %s, but device %s in the same iommu group requires type %s - using default\n",
 				iommu_domain_type_str(type), dev_name(last_dev),
 				iommu_domain_type_str(best_type));
-			return iommu_def_domain_type;
+			return 0;
 		}
 		if (!best_type)
 			best_type = type;
 		last_dev = gdev->dev;
 	}
-
-	if (!best_type)
-		return iommu_def_domain_type;
 	return best_type;
 }
 
@@ -1874,9 +1876,8 @@ int bus_iommu_probe(struct bus_type *bus)
 		list_del_init(&group->entry);
 
 		/* Try to allocate default domain */
-		iommu_group_alloc_default_domain(
-			bus, group, iommu_get_default_domain_type(group, 0));
-
+		group->default_domain = iommu_group_alloc_default_domain(
+			group, iommu_get_default_domain_type(group, 0));
 		if (!group->default_domain) {
 			mutex_unlock(&group->mutex);
 			continue;
@@ -2915,9 +2916,11 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 	group->domain = NULL;
 
 	/* Sets group->default_domain to the newly allocated domain */
-	ret = iommu_group_alloc_default_domain(dev->bus, group, type);
-	if (ret)
+	group->default_domain = iommu_group_alloc_default_domain(group, type);
+	if (!group->default_domain) {
+		ret = -EINVAL;
 		goto restore_old_domain;
+	}
 
 	group->domain = prev_dom;
 	ret = iommu_create_device_direct_mappings(group, dev);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (12 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  7:57   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM Jason Gunthorpe
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Make iommu_change_dev_def_domain() general enough to setup the initial
default_domain or replace it with a new default_domain. Call the new
function iommu_setup_default_domain() and make it the only place in the
code that stores to group->default_domain.

Consolidate the three copies of the default_domain setup sequence. The flow
requires:

 - Determining the domain type to use
 - Checking if the current default domain is the same type
 - Allocating a domain
 - Doing iommu_create_device_direct_mappings()
 - Attaching it to devices
 - Store group->default_domain

This adjusts the domain allocation from the prior patch so it can detect
when an allocation step would yield the domain the group already has, which
is a more robust version of what change default domain was doing.
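
For orientation, the three call sites this converges on end up looking
roughly like this (condensed from the hunks below, error handling elided):

	/* iommu_probe_device(): first attach, auto-selected type */
	ret = iommu_setup_default_domain(group, 0);

	/* bus_iommu_probe(): per-group setup on the bus probe path */
	ret = iommu_setup_default_domain(group, 0);

	/* iommu_group_store_type(): sysfs-requested default domain change */
	ret = iommu_setup_default_domain(group, req_type);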

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 202 +++++++++++++++++++-----------------------
 1 file changed, 89 insertions(+), 113 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index bbbc4c865dbecd..8744b15f0bd1f0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -91,9 +91,6 @@ static const char * const iommu_group_resv_type_string[] = {
 
 static int iommu_bus_notifier(struct notifier_block *nb,
 			      unsigned long action, void *data);
-static struct iommu_domain *
-iommu_group_alloc_default_domain(struct iommu_group *group, int req_type);
-static int iommu_get_def_domain_type(struct device *dev);
 static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
 						 unsigned type);
 static int __iommu_attach_device(struct iommu_domain *domain,
@@ -124,7 +121,9 @@ static void __iommu_group_set_domain_nofail(struct iommu_group *group,
 		group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
 }
 
-static int iommu_create_device_direct_mappings(struct iommu_group *group,
+static int iommu_setup_default_domain(struct iommu_group *group,
+				      int target_type);
+static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
 					       struct device *dev);
 static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
 static ssize_t iommu_group_store_type(struct iommu_group *group,
@@ -419,33 +418,18 @@ int iommu_probe_device(struct device *dev)
 
 	mutex_lock(&group->mutex);
 
-	iommu_create_device_direct_mappings(group, dev);
+	if (group->default_domain)
+		iommu_create_device_direct_mappings(group->default_domain, dev);
 
 	if (group->domain) {
 		ret = __iommu_device_set_domain(group, dev, group->domain, 0);
+		if (ret)
+			goto err_unlock;
 	} else if (!group->default_domain) {
-		/*
-		 * Try to allocate a default domain - needs support from the
-		 * IOMMU driver. There are still some drivers which don't
-		 * support default domains, so the return value is not yet
-		 * checked.
-		 */
-		group->default_domain = iommu_group_alloc_default_domain(
-			group, iommu_get_def_domain_type(dev));
-		if (group->default_domain) {
-			iommu_create_device_direct_mappings(group, dev);
-			ret = __iommu_group_set_domain(group,
-						       group->default_domain);
-		}
-
-		/*
-		 * We assume that the iommu driver starts up the device in
-		 * 'set_platform_dma_ops' mode if it does not support default
-		 * domains.
-		 */
+		ret = iommu_setup_default_domain(group, 0);
+		if (ret)
+			goto err_unlock;
 	}
-	if (ret)
-		goto err_unlock;
 
 	mutex_unlock(&group->mutex);
 	iommu_group_put(group);
@@ -991,16 +975,15 @@ int iommu_group_set_name(struct iommu_group *group, const char *name)
 }
 EXPORT_SYMBOL_GPL(iommu_group_set_name);
 
-static int iommu_create_device_direct_mappings(struct iommu_group *group,
+static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
 					       struct device *dev)
 {
-	struct iommu_domain *domain = group->default_domain;
 	struct iommu_resv_region *entry;
 	struct list_head mappings;
 	unsigned long pg_size;
 	int ret = 0;
 
-	if (!domain || !iommu_is_dma_domain(domain))
+	if (!iommu_is_dma_domain(domain))
 		return 0;
 
 	BUG_ON(!domain->pgsize_bitmap);
@@ -1671,6 +1654,15 @@ static int iommu_get_def_domain_type(struct device *dev)
 	return 0;
 }
 
+static struct iommu_domain *
+__iommu_group_alloc_default_domain(struct bus_type *bus,
+				   struct iommu_group *group, int req_type)
+{
+	if (group->default_domain && group->default_domain->type == req_type)
+		return group->default_domain;
+	return __iommu_domain_alloc(bus, req_type);
+}
+
 /*
  * req_type of 0 means "auto" which means to select a domain based on
  * iommu_def_domain_type or what the driver actually supports.
@@ -1686,17 +1678,17 @@ iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
 	lockdep_assert_held(&group->mutex);
 
 	if (req_type)
-		return __iommu_domain_alloc(bus, req_type);
+		return __iommu_group_alloc_default_domain(bus, group, req_type);
 
 	/* The driver gave no guidance on what type to use, try the default */
-	dom = __iommu_domain_alloc(bus, iommu_def_domain_type);
+	dom = __iommu_group_alloc_default_domain(bus, group, iommu_def_domain_type);
 	if (dom)
 		return dom;
 
 	/* Otherwise IDENTITY and DMA_FQ defaults will try DMA */
 	if (iommu_def_domain_type == IOMMU_DOMAIN_DMA)
 		return NULL;
-	dom = __iommu_domain_alloc(bus, IOMMU_DOMAIN_DMA);
+	dom = __iommu_group_alloc_default_domain(bus, group, IOMMU_DOMAIN_DMA);
 	if (!dom)
 		return NULL;
 
@@ -1839,21 +1831,6 @@ static void __iommu_group_dma_finalize(struct iommu_group *group)
 				   iommu_group_do_probe_finalize);
 }
 
-static int iommu_do_create_direct_mappings(struct device *dev, void *data)
-{
-	struct iommu_group *group = data;
-
-	iommu_create_device_direct_mappings(group, dev);
-
-	return 0;
-}
-
-static int iommu_group_create_direct_mappings(struct iommu_group *group)
-{
-	return __iommu_group_for_each_dev(group, group,
-					  iommu_do_create_direct_mappings);
-}
-
 int bus_iommu_probe(struct bus_type *bus)
 {
 	struct iommu_group *group, *next;
@@ -1875,27 +1852,16 @@ int bus_iommu_probe(struct bus_type *bus)
 		/* Remove item from the list */
 		list_del_init(&group->entry);
 
-		/* Try to allocate default domain */
-		group->default_domain = iommu_group_alloc_default_domain(
-			group, iommu_get_default_domain_type(group, 0));
-		if (!group->default_domain) {
+		ret = iommu_setup_default_domain(group, 0);
+		if (ret) {
 			mutex_unlock(&group->mutex);
-			continue;
+			return ret;
 		}
-
-		iommu_group_create_direct_mappings(group);
-
-		ret = __iommu_group_set_domain(group, group->default_domain);
-
 		mutex_unlock(&group->mutex);
-
-		if (ret)
-			break;
-
 		__iommu_group_dma_finalize(group);
 	}
 
-	return ret;
+	return 0;
 }
 
 bool iommu_present(struct bus_type *bus)
@@ -2878,68 +2844,83 @@ int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
 }
 EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
 
-/*
- * Changes the default domain of an iommu group
- *
- * @group: The group for which the default domain should be changed
- * @dev: The first device in the group
- * @type: The type of the new default domain that gets associated with the group
- *
- * Returns 0 on success and error code on failure
+/**
+ * iommu_setup_default_domain - Set the default_domain for the group
+ * @group: Group to change
+ * @target_type: Domain type to set as the default_domain
  *
- * Note:
- * 1. Presently, this function is called only when user requests to change the
- *    group's default domain type through /sys/kernel/iommu_groups/<grp_id>/type
- *    Please take a closer look if intended to use for other purposes.
+ * Allocate a default domain and set it as the current domain on the group. If
+ * the group already has a default domain it will be changed to the target_type.
+ * When target_type is 0 the default domain is selected based on driver and
+ * system preferences.
  */
-static int iommu_change_dev_def_domain(struct iommu_group *group,
-				       struct device *dev, int type)
+static int iommu_setup_default_domain(struct iommu_group *group,
+				      int target_type)
 {
-	struct iommu_domain *prev_dom;
+	struct iommu_domain *old_dom = group->default_domain;
+	struct group_device *gdev;
+	struct iommu_domain *dom;
+	int req_type;
 	int ret;
 
 	lockdep_assert_held(&group->mutex);
 
-	prev_dom = group->default_domain;
-	type = iommu_get_default_domain_type(group, type);
-	if (type < 0)
+	req_type = iommu_get_default_domain_type(group, target_type);
+	if (req_type < 0)
 		return -EINVAL;
 
 	/*
-	 * Switch to a new domain only if the requested domain type is different
-	 * from the existing default domain type
+	 * There are still some drivers which don't support default domains, so
+	 * we ignore the failure and leave group->default_domain NULL.
+	 *
+	 * We assume that the iommu driver starts up the device in
+	 * 'set_platform_dma_ops' mode if it does not support default domains.
 	 */
-	if (prev_dom->type == type)
+	dom = iommu_group_alloc_default_domain(group, req_type);
+	if (!dom) {
+		/* Once in default_domain mode we never leave */
+		if (group->default_domain)
+			return -ENODEV;
+		group->default_domain = NULL;
 		return 0;
-
-	group->default_domain = NULL;
-	group->domain = NULL;
-
-	/* Sets group->default_domain to the newly allocated domain */
-	group->default_domain = iommu_group_alloc_default_domain(group, type);
-	if (!group->default_domain) {
-		ret = -EINVAL;
-		goto restore_old_domain;
 	}
 
-	group->domain = prev_dom;
-	ret = iommu_create_device_direct_mappings(group, dev);
-	if (ret)
-		goto free_new_domain;
-
-	ret = __iommu_group_set_domain(group, group->default_domain);
-	if (ret)
-		goto free_new_domain;
-
-	iommu_domain_free(prev_dom);
+	if (group->default_domain == dom)
+		return 0;
 
-	return 0;
+	/*
+	 * IOMMU_RESV_DIRECT and IOMMU_RESV_DIRECT_RELAXABLE regions must be
+	 * mapped before their device is attached, in order to guarantee
+	 * continuity with any FW activity
+	 */
+	for_each_group_device(group, gdev)
+		iommu_create_device_direct_mappings(dom, gdev->dev);
 
-free_new_domain:
-	iommu_domain_free(group->default_domain);
-restore_old_domain:
-	group->default_domain = prev_dom;
+	/* We must set default_domain early for __iommu_device_set_domain */
+	group->default_domain = dom;
+	if (!group->domain) {
+		/*
+		 * Drivers are not allowed to fail the first domain attach.
+		 * The only way to recover from this is to fail attaching the
+		 * iommu driver and call ops->release_device. Put the domain
+		 * in group->default_domain so it is freed after.
+		 */
+		ret = __iommu_group_set_domain_internal(
+			group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
+		if (WARN_ON(ret))
+			goto out_free;
+	} else {
+		ret = __iommu_group_set_domain(group, dom);
+		if (ret) {
+			iommu_domain_free(dom);
+			group->default_domain = old_dom;
+			return ret;
+		}
+	}
 
+out_free:
+	if (old_dom)
+		iommu_domain_free(old_dom);
 	return ret;
 }
 
@@ -2955,8 +2936,6 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 static ssize_t iommu_group_store_type(struct iommu_group *group,
 				      const char *buf, size_t count)
 {
-	struct group_device *grp_dev;
-	struct device *dev;
 	int ret, req_type;
 
 	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
@@ -2994,10 +2973,7 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 		return -EPERM;
 	}
 
-	grp_dev = list_first_entry(&group->devices, struct group_device, list);
-	dev = grp_dev->dev;
-
-	ret = iommu_change_dev_def_domain(group, dev, req_type);
+	ret = iommu_setup_default_domain(group, req_type);
 
 	/*
 	 * Release the mutex here because ops->probe_finalize() call-back of
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (13 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  8:00   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev() Jason Gunthorpe
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

For now several ARM drivers do not allow mappings to be created until a
domain is attached. This means they do not technically support
IOMMU_RESV_DIRECT as it requires the 1:1 maps to work continuously.

Currently if the platform requests these maps on ARM systems they are
silently ignored.

Work around this by trying again to establish the direct mappings after
the domain is attached if the pre-attach attempt failed.

In the long run the drivers will be fixed to fully set up domains when they
are created, without waiting for attachment.
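
Condensed from the diff below, the retry amounts to (simplified sketch,
second-pass error handling left out):

	direct_failed = false;
	for_each_group_device(group, gdev) {
		if (iommu_create_device_direct_mappings(dom, gdev->dev))
			direct_failed = true;
	}

	/* ... attach dom as the group's domain ... */

	if (direct_failed) {
		for_each_group_device(group, gdev)
			iommu_create_device_direct_mappings(dom, gdev->dev);
	}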

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8744b15f0bd1f0..4c0908402f0139 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2860,6 +2860,7 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 	struct iommu_domain *old_dom = group->default_domain;
 	struct group_device *gdev;
 	struct iommu_domain *dom;
+	bool direct_failed;
 	int req_type;
 	int ret;
 
@@ -2893,8 +2894,11 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 	 * mapped before their device is attached, in order to guarantee
 	 * continuity with any FW activity
 	 */
-	for_each_group_device(group, gdev)
-		iommu_create_device_direct_mappings(dom, gdev->dev);
+	direct_failed = false;
+	for_each_group_device(group, gdev) {
+		if (iommu_create_device_direct_mappings(dom, gdev->dev))
+			direct_failed = true;
+	}
 
 	/* We must set default_domain early for __iommu_device_set_domain */
 	group->default_domain = dom;
@@ -2918,6 +2922,17 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 		}
 	}
 
+	/*
+	 * Drivers are supposed to allow mappings to be installed in a domain
+	 * before device attachment, but some don't. Hack around this defect by
+	 * trying again after attaching. If this happens it means the device
+	 * will not continuously have the IOMMU_RESV_DIRECT map.
+	 */
+	if (direct_failed) {
+		for_each_group_device(group, gdev)
+			iommu_create_device_direct_mappings(dom, gdev->dev);
+	}
+
 out_free:
 	if (old_dom)
 		iommu_domain_free(old_dom);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (14 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  8:06   ` Tian, Kevin
  2023-04-12 13:51 ` [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type() Jason Gunthorpe
  2023-05-01 22:01 ` [PATCH v4 00/17] Consolidate the error handling around device attachment Heiko Stübner
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

The last two users of it are quite trivial, so just open code the one-line
loop.
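
The replacement pattern, as in the hunks below, is just:

	for_each_group_device(group, gdev)
		iommu_group_do_probe_finalize(gdev->dev);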

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 53 ++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 28 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4c0908402f0139..85b1faac641267 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1134,20 +1134,6 @@ void iommu_group_remove_device(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(iommu_group_remove_device);
 
-static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,
-				      int (*fn)(struct device *, void *))
-{
-	struct group_device *device;
-	int ret = 0;
-
-	for_each_group_device(group, device) {
-		ret = fn(device->dev, data);
-		if (ret)
-			break;
-	}
-	return ret;
-}
-
 /**
  * iommu_group_for_each_dev - iterate over each device in the group
  * @group: the group
@@ -1162,10 +1148,15 @@ static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,
 int iommu_group_for_each_dev(struct iommu_group *group, void *data,
 			     int (*fn)(struct device *, void *))
 {
-	int ret;
+	struct group_device *device;
+	int ret = 0;
 
 	mutex_lock(&group->mutex);
-	ret = __iommu_group_for_each_dev(group, data, fn);
+	for_each_group_device(group, device) {
+		ret = fn(device->dev, data);
+		if (ret)
+			break;
+	}
 	mutex_unlock(&group->mutex);
 
 	return ret;
@@ -1815,20 +1806,12 @@ static int iommu_get_default_domain_type(struct iommu_group *group,
 	return best_type;
 }
 
-static int iommu_group_do_probe_finalize(struct device *dev, void *data)
+static void iommu_group_do_probe_finalize(struct device *dev)
 {
 	const struct iommu_ops *ops = dev_iommu_ops(dev);
 
 	if (ops->probe_finalize)
 		ops->probe_finalize(dev);
-
-	return 0;
-}
-
-static void __iommu_group_dma_finalize(struct iommu_group *group)
-{
-	__iommu_group_for_each_dev(group, group->default_domain,
-				   iommu_group_do_probe_finalize);
 }
 
 int bus_iommu_probe(struct bus_type *bus)
@@ -1847,6 +1830,8 @@ int bus_iommu_probe(struct bus_type *bus)
 		return ret;
 
 	list_for_each_entry_safe(group, next, &group_list, entry) {
+		struct group_device *gdev;
+
 		mutex_lock(&group->mutex);
 
 		/* Remove item from the list */
@@ -1858,7 +1843,15 @@ int bus_iommu_probe(struct bus_type *bus)
 			return ret;
 		}
 		mutex_unlock(&group->mutex);
-		__iommu_group_dma_finalize(group);
+
+		/*
+		 * Mis-locked because the ops->probe_finalize() call-back of
+		 * some IOMMU drivers calls arm_iommu_attach_device() which
+		 * in-turn might call back into IOMMU core code, where it tries
+		 * to take group->mutex, resulting in a deadlock.
+		 */
+		for_each_group_device(group, gdev)
+			iommu_group_do_probe_finalize(gdev->dev);
 	}
 
 	return 0;
@@ -2999,8 +2992,12 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	mutex_unlock(&group->mutex);
 
 	/* Make sure dma_ops is appropriatley set */
-	if (!ret)
-		__iommu_group_dma_finalize(group);
+	if (!ret) {
+		struct group_device *gdev;
+
+		for_each_group_device(group, gdev)
+			iommu_group_do_probe_finalize(gdev->dev);
+	}
 
 	return ret ?: count;
 }
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type()
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (15 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev() Jason Gunthorpe
@ 2023-04-12 13:51 ` Jason Gunthorpe
  2023-04-20  8:07   ` Tian, Kevin
  2023-05-01 22:01 ` [PATCH v4 00/17] Consolidate the error handling around device attachment Heiko Stübner
  17 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-12 13:51 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Use a normal "goto unwind" instead of trying to be clever with checking
!ret and manually managing the unlock.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 85b1faac641267..bd6ad6ba80aa6a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2944,6 +2944,7 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 static ssize_t iommu_group_store_type(struct iommu_group *group,
 				      const char *buf, size_t count)
 {
+	struct group_device *gdev;
 	int ret, req_type;
 
 	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
@@ -2968,20 +2969,23 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	if (req_type == IOMMU_DOMAIN_DMA_FQ &&
 	    group->default_domain->type == IOMMU_DOMAIN_DMA) {
 		ret = iommu_dma_init_fq(group->default_domain);
-		if (!ret)
-			group->default_domain->type = IOMMU_DOMAIN_DMA_FQ;
-		mutex_unlock(&group->mutex);
+		if (ret)
+			goto out_unlock;
 
-		return ret ?: count;
+		group->default_domain->type = IOMMU_DOMAIN_DMA_FQ;
+		ret = count;
+		goto out_unlock;
 	}
 
 	/* Otherwise, ensure that device exists and no driver is bound. */
 	if (list_empty(&group->devices) || group->owner_cnt) {
-		mutex_unlock(&group->mutex);
-		return -EPERM;
+		ret = -EPERM;
+		goto out_unlock;
 	}
 
 	ret = iommu_setup_default_domain(group, req_type);
+	if (ret)
+		goto out_unlock;
 
 	/*
 	 * Release the mutex here because ops->probe_finalize() call-back of
@@ -2992,13 +2996,12 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	mutex_unlock(&group->mutex);
 
 	/* Make sure dma_ops is appropriatley set */
-	if (!ret) {
-		struct group_device *gdev;
-
-		for_each_group_device(group, gdev)
-			iommu_group_do_probe_finalize(gdev->dev);
-	}
+	for_each_group_device(group, gdev)
+		iommu_group_do_probe_finalize(gdev->dev);
+	return count;
 
+out_unlock:
+	mutex_unlock(&group->mutex);
 	return ret ?: count;
 }
 
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain
  2023-04-12 13:51 ` [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain Jason Gunthorpe
@ 2023-04-13  2:35   ` Baolu Lu
  2023-04-20  7:25   ` Tian, Kevin
  1 sibling, 0 replies; 35+ messages in thread
From: Baolu Lu @ 2023-04-13  2:35 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Tom Rix,
	Will Deacon
  Cc: baolu.lu, Kevin Tian, Nicolin Chen

On 4/12/23 9:51 PM, Jason Gunthorpe wrote:
> Reorganize the attach_deferred logic to set dev->iommu->attach_deferred
> immediately during probe and then have __iommu_device_set_domain() check
> it and not attach the default_domain.
> 
> This is to prepare for removing the group->domain set from
> iommu_group_alloc_default_domain() by calling __iommu_group_set_domain()
> to set the group->domain.
> 
> Signed-off-by: Jason Gunthorpe<jgg@nvidia.com>

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
baolu

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain()
  2023-04-12 13:51 ` [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain() Jason Gunthorpe
@ 2023-04-13  2:47   ` Baolu Lu
  2023-04-20  7:54   ` Tian, Kevin
  1 sibling, 0 replies; 35+ messages in thread
From: Baolu Lu @ 2023-04-13  2:47 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Tom Rix,
	Will Deacon
  Cc: baolu.lu, Kevin Tian, Nicolin Chen

On 4/12/23 9:51 PM, Jason Gunthorpe wrote:
> Robin points out that the fallback to guessing what domains the driver
> supports should only happen if the driver doesn't return a preference from
> its ops->def_domain_type().
> 
> Re-organize iommu_group_alloc_default_domain() so it internally uses
> iommu_def_domain_type only during the fallback and makes it clearer how
> the fallback sequence works.
> 
> Make iommu_group_alloc_default_domain() return the domain so the return
> based logic is cleaner and to prepare for the next patch.
> 
> Remove the iommu_alloc_default_domain() function as it is now trivially
> just calling iommu_group_alloc_default_domain().
> 
> Signed-off-by: Jason Gunthorpe<jgg@nvidia.com>

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
baolu

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain
  2023-04-12 13:51 ` [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain Jason Gunthorpe
  2023-04-13  2:35   ` Baolu Lu
@ 2023-04-20  7:25   ` Tian, Kevin
  2023-04-20 12:59     ` Jason Gunthorpe
  1 sibling, 1 reply; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:25 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> @@ -2218,6 +2202,12 @@ static int __iommu_device_set_domain(struct
> iommu_group *group,
>  {
>  	int ret;
> 
> +	if (dev->iommu->attach_deferred) {
> +		if (new_domain == group->default_domain)
> +			return 0;
> +		dev->iommu->attach_deferred = 0;
> +	}
> +

This leads to a small semantics change, though I'm not sure whether it's
important to sustain.

Before this patch the 1st attach is ignored. Deferred attach is conducted
when the first DMA map request comes by calling iommu_deferred_attach().

But if the driver tries to do iommu_attach_group() or iommu_attach_device()
in between it will succeed. No deferred_attach check in that path.

Now with this patch the behavior is changed. If the driver tries to attach the
device to default domain in between it's ignored. Otherwise it will succeed.
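
As a concrete sequence (hypothetical driver, call flow simplified):

	iommu_probe_device(dev);          /* first attach is deferred */
	iommu_attach_group(dom, group);
	/* before: always attaches
	 * after:  ignored when dom == group->default_domain,
	 *         otherwise clears the deferral and attaches */
	iommu_deferred_attach(dev, dom);  /* run on the first DMA map */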

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device()
  2023-04-12 13:51 ` [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device() Jason Gunthorpe
@ 2023-04-20  7:26   ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:26 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> This function is only used to construct the groups, it should not be
> operating the iommu driver.
> 
> External callers in VFIO and POWER do not have any iommu drivers on the
> devices so group->domain will be NULL.
> 
> The only internal caller is from iommu_probe_device() which already calls
> iommu_group_do_dma_first_attach(), meaning we are calling it twice in the
> only case it matters.
> 
> Since iommu_probe_device() is the logical place to sort out the group's
> domain, remove the call from iommu_group_add_device().
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach
  2023-04-12 13:51 ` [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach Jason Gunthorpe
@ 2023-04-20  7:29   ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:29 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> The iommu_probe_device() path calls
> iommu_create_device_direct_mappings()
> after attaching the device.
> 
> IOMMU_RESV_DIRECT maps need to be continually in place, so if a
> hotplugged
> device has new ranges they should have been mapped into the default
> domain
> before it is attached.
> 
> Move the iommu_create_device_direct_mappings() call up.
> 
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type
  2023-04-12 13:51 ` [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type Jason Gunthorpe
@ 2023-04-20  7:30   ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:30 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> Put all the code to calculate the default domain type into one
> function. Make the function able to handle the
> iommu_change_dev_def_domain() by taking in the target domain type and
> erroring out if the target type isn't reachable.
> 
> This makes it really clear that specifying a 0 type during
> iommu_change_dev_def_domain() will have the same outcome as the
> normal
> probe path.
> 
> Remove the obfuscating use of __iommu_group_for_each_dev() and related
> struct __group_domain_type.
> 
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain()
  2023-04-12 13:51 ` [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain() Jason Gunthorpe
  2023-04-13  2:47   ` Baolu Lu
@ 2023-04-20  7:54   ` Tian, Kevin
  1 sibling, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:54 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> Robin points out that the fallback to guessing what domains the driver
> supports should only happen if the driver doesn't return a preference from
> its ops->def_domain_type().
> 
> Re-organize iommu_group_alloc_default_domain() so it internally uses
> iommu_def_domain_type only during the fallback and makes it clearer how
> the fallback sequence works.
> 
> Make iommu_group_alloc_default_domain() return the domain so the
> return
> based logic is cleaner and to prepare for the next patch.
> 
> Remove the iommu_alloc_default_domain() function as it is now trivially
> just calling iommu_group_alloc_default_domain().
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function
  2023-04-12 13:51 ` [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function Jason Gunthorpe
@ 2023-04-20  7:57   ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  7:57 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> Make iommu_change_dev_def_domain() general enough to setup the initial
> default_domain or replace it with a new default_domain. Call the new
> function iommu_setup_default_domain() and make it the only place in the
> code that stores to group->default_domain.
> 
> Consolidate the three copies of the default_domain setup sequence. The
> flow
> flow requires:
> 
>  - Determining the domain type to use
>  - Checking if the current default domain is the same type
>  - Allocating a domain
>  - Doing iommu_create_device_direct_mappings()
>  - Attaching it to devices
>  - Store group->default_domain
> 
> This adjusts the domain allocation from the prior patch to be able to
> detect if each of the allocation steps is already the domain we already
> have, which is a more robust version of what change default domain was
> already doing.
> 
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM
  2023-04-12 13:51 ` [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM Jason Gunthorpe
@ 2023-04-20  8:00   ` Tian, Kevin
  2023-04-26 13:36     ` Jason Gunthorpe
  0 siblings, 1 reply; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  8:00 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
>
> @@ -2918,6 +2922,17 @@ static int iommu_setup_default_domain(struct
> iommu_group *group,
>  		}
>  	}
> 
> +	/*
> +	 * Drivers are supposed to allow mappings to be installed in a
> domain
> +	 * before device attachment, but some don't. Hack around this defect
> by
> +	 * trying again after attaching. If this happens it means the device
> +	 * will not continuously have the IOMMU_RESV_DIRECT map.
> +	 */
> +	if (direct_failed) {
> +		for_each_group_device(group, gdev)
> +			iommu_create_device_direct_mappings(dom, gdev-
> >dev);
> +	}
> +

Throw out a warning so the user knows that the continuity might be broken
here?

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev()
  2023-04-12 13:51 ` [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev() Jason Gunthorpe
@ 2023-04-20  8:06   ` Tian, Kevin
  2023-04-20 13:01     ` Jason Gunthorpe
  0 siblings, 1 reply; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  8:06 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> @@ -1858,7 +1843,15 @@ int bus_iommu_probe(struct bus_type *bus)
>  			return ret;
>  		}
>  		mutex_unlock(&group->mutex);
> -		__iommu_group_dma_finalize(group);
> +
> +		/*
> +		 * Mis-locked because the ops->probe_finalize() call-back of

I guess this comment is trying to explain why the following code is
executed out of lock. In that case it reads clearer to me to start
with "Avoid mis-lock ..." and put it before mutex_unlock().

> +		 * some IOMMU drivers calls arm_iommu_attach_device()
> which
> +		 * in-turn might call back into IOMMU core code, where it
> tries
> +		 * to take group->mutex, resulting in a deadlock.
> +		 */
> +		for_each_group_device(group, gdev)
> +			iommu_group_do_probe_finalize(gdev->dev);
>  	}
> 
>  	return 0;
> @@ -2999,8 +2992,12 @@ static ssize_t iommu_group_store_type(struct
> iommu_group *group,
>  	mutex_unlock(&group->mutex);
> 
>  	/* Make sure dma_ops is appropriatley set */

while at it, s/appropriatley/appropriately/

> -	if (!ret)
> -		__iommu_group_dma_finalize(group);
> +	if (!ret) {
> +		struct group_device *gdev;
> +
> +		for_each_group_device(group, gdev)
> +			iommu_group_do_probe_finalize(gdev->dev);
> +	}
> 
>  	return ret ?: count;
>  }

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type()
  2023-04-12 13:51 ` [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type() Jason Gunthorpe
@ 2023-04-20  8:07   ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-20  8:07 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Joerg Roedel, llvm, Nathan Chancellor,
	Nick Desaulniers, Miguel Ojeda, Robin Murphy, Rix, Tom,
	Will Deacon
  Cc: Lu Baolu, Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 12, 2023 9:52 PM
> 
> Use a normal "goto unwind" instead of trying to be clever with checking
> !ret and manually managing the unlock.
> 
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain
  2023-04-20  7:25   ` Tian, Kevin
@ 2023-04-20 12:59     ` Jason Gunthorpe
  0 siblings, 0 replies; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-20 12:59 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Rix, Tom, Will Deacon, Lu Baolu,
	Nicolin Chen

On Thu, Apr 20, 2023 at 07:25:59AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, April 12, 2023 9:52 PM
> > 
> > @@ -2218,6 +2202,12 @@ static int __iommu_device_set_domain(struct
> > iommu_group *group,
> >  {
> >  	int ret;
> > 
> > +	if (dev->iommu->attach_deferred) {
> > +		if (new_domain == group->default_domain)
> > +			return 0;
> > +		dev->iommu->attach_deferred = 0;
> > +	}
> > +
> 
> This leads to a small semantics change, though I'm not sure whether it's
> important to sustain.
> 
> Before this patch the 1st attach is ignored. Deferred attach is conducted
> when the first DMA map request comes by calling iommu_deferred_attach().
> 
> But if the driver tries to do iommu_attach_group() or iommu_attach_device()
> in between it will succeed. No deferred_attach check in that path.
> 
> Now with this patch the behavior is changed. If the driver tries to attach the
> device to default domain in between it's ignored. Otherwise it will succeed.

Sort of..

If a driver is using the attach APIs then the first successful attach
to a non-default domain will clear the flag, then any following attach
even back to the default domain will succeed.

The tiny difference is that if a driver tries to attach to the already
attached domain, the NOP now extends to keeping the deferred attach too.

I don't think any drivers do something like this so it seems OK.

v3 didn't have this as the flag kept the different call paths
separate.

Jason

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev()
  2023-04-20  8:06   ` Tian, Kevin
@ 2023-04-20 13:01     ` Jason Gunthorpe
  2023-04-21  3:05       ` Tian, Kevin
  0 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-20 13:01 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Rix, Tom, Will Deacon, Lu Baolu,
	Nicolin Chen

On Thu, Apr 20, 2023 at 08:06:17AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, April 12, 2023 9:52 PM
> > 
> > @@ -1858,7 +1843,15 @@ int bus_iommu_probe(struct bus_type *bus)
> >  			return ret;
> >  		}
> >  		mutex_unlock(&group->mutex);
> > -		__iommu_group_dma_finalize(group);
> > +
> > +		/*
> > +		 * Mis-locked because the ops->probe_finalize() call-back of
> 
> I guess this comment is trying to explain why the following code is
> executed out of lock. In that case it reads clearer to me to start
> with "Avoid mis-lock ..." and put it before mutex_unlock().

It is not avoiding anything; this is deliberately locked wrong.

We are iterating over for_each_group_device() without holding the
required group->mutex and hope&pray nothing bad happens. We do this
because of some bad ARM32 driver implementations of
ops->probe_finalize that recurse into the lock.
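
The recursion in question is roughly (ARM32 path, heavily simplified):

	mutex_lock(&group->mutex);
	ops->probe_finalize(dev)
	  -> arm_iommu_attach_device(dev, mapping)
	    -> iommu_attach_device(mapping->domain, dev)
	      -> mutex_lock(&group->mutex)	/* deadlock if still held */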

Thanks,
Jason

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev()
  2023-04-20 13:01     ` Jason Gunthorpe
@ 2023-04-21  3:05       ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-21  3:05 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Rix, Tom, Will Deacon, Lu Baolu,
	Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, April 20, 2023 9:01 PM
> 
> On Thu, Apr 20, 2023 at 08:06:17AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Wednesday, April 12, 2023 9:52 PM
> > >
> > > @@ -1858,7 +1843,15 @@ int bus_iommu_probe(struct bus_type *bus)
> > >  			return ret;
> > >  		}
> > >  		mutex_unlock(&group->mutex);
> > > -		__iommu_group_dma_finalize(group);
> > > +
> > > +		/*
> > > +		 * Mis-locked because the ops->probe_finalize() call-back of
> >
> > I guess this comment is trying to explain why the following code is
> > executed out of lock. In that case it reads clearer to me to start
> > with "Avoid mis-lock ..." and put it before mutex_unlock().
> 
> Iit is not avoiding anything, this is deliberately locked wrong.
> 
> We are iterating over for_each_group_device() without holding the
> required group->mutex and hope&pray nothing bad happens. We do this
> because of some bad ARM32 driver implementations of
> ops->probe_finalize that recurse into the lock.
> 

Then add a FIXME tag there?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM
  2023-04-20  8:00   ` Tian, Kevin
@ 2023-04-26 13:36     ` Jason Gunthorpe
  2023-04-26 23:48       ` Tian, Kevin
  0 siblings, 1 reply; 35+ messages in thread
From: Jason Gunthorpe @ 2023-04-26 13:36 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Rix, Tom, Will Deacon, Lu Baolu,
	Nicolin Chen

On Thu, Apr 20, 2023 at 08:00:28AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, April 12, 2023 9:52 PM
> >
> > @@ -2918,6 +2922,17 @@ static int iommu_setup_default_domain(struct
> > iommu_group *group,
> >  		}
> >  	}
> > 
> > +	/*
> > +	 * Drivers are supposed to allow mappings to be installed in a
> > domain
> > +	 * before device attachment, but some don't. Hack around this defect
> > by
> > +	 * trying again after attaching. If this happens it means the device
> > +	 * will not continuously have the IOMMU_RESV_DIRECT map.
> > +	 */
> > +	if (direct_failed) {
> > +		for_each_group_device(group, gdev)
> > +			iommu_create_device_direct_mappings(dom, gdev-
> > >dev);
> > +	}
> > +
> 
> Throw out a warning so user knows that the continuity might be broken
> here?

I suppose we should also fail the whole function if we hit failures
here?

So like this:

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index f44e2c21e9d05e..7a3b249601ea49 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2873,8 +2873,12 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 	 */
 	direct_failed = false;
 	for_each_group_device(group, gdev) {
-		if (iommu_create_device_direct_mappings(dom, gdev->dev))
+		if (iommu_create_device_direct_mappings(dom, gdev->dev)) {
 			direct_failed = true;
+			dev_warn_once(
+				gdev->dev->iommu->iommu_dev->dev,
+				"IOMMU driver was not able to establish FW requested direct mapping.");
+		}
 	}
 
 	/* We must set default_domain early for __iommu_device_set_domain */
@@ -2906,10 +2910,20 @@ static int iommu_setup_default_domain(struct iommu_group *group,
 	 * will not continuously have the IOMMU_RESV_DIRECT map.
 	 */
 	if (direct_failed) {
-		for_each_group_device(group, gdev)
-			iommu_create_device_direct_mappings(dom, gdev->dev);
+		for_each_group_device(group, gdev) {
+			ret = iommu_create_device_direct_mappings(dom, gdev->dev);
+			if (ret)
+				goto err_restore;
+		}
 	}
 
+err_restore:
+	if (old_dom) {
+		__iommu_group_set_domain_internal(
+			group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
+		iommu_domain_free(dom);
+		old_dom = NULL;
+	}
 out_free:
 	if (old_dom)
 		iommu_domain_free(old_dom);

Thanks,
Jason

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* RE: [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM
  2023-04-26 13:36     ` Jason Gunthorpe
@ 2023-04-26 23:48       ` Tian, Kevin
  0 siblings, 0 replies; 35+ messages in thread
From: Tian, Kevin @ 2023-04-26 23:48 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Rix, Tom, Will Deacon, Lu Baolu,
	Nicolin Chen

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 26, 2023 9:36 PM
> 
> On Thu, Apr 20, 2023 at 08:00:28AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Wednesday, April 12, 2023 9:52 PM
> > >
> > > @@ -2918,6 +2922,17 @@ static int
> iommu_setup_default_domain(struct
> > > iommu_group *group,
> > >  		}
> > >  	}
> > >
> > > +	/*
> > > +	 * Drivers are supposed to allow mappings to be installed in a
> > > domain
> > > +	 * before device attachment, but some don't. Hack around this defect
> > > by
> > > +	 * trying again after attaching. If this happens it means the device
> > > +	 * will not continuously have the IOMMU_RESV_DIRECT map.
> > > +	 */
> > > +	if (direct_failed) {
> > > +		for_each_group_device(group, gdev)
> > > +			iommu_create_device_direct_mappings(dom, gdev-
> > > >dev);
> > > +	}
> > > +
> >
> > Throw out a warning so user knows that the continuity might be broken
> > here?
> 
> I suppose we should also fail the whole function if we hit failures
> here?

agree

> 
> So like this:
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index f44e2c21e9d05e..7a3b249601ea49 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2873,8 +2873,12 @@ static int iommu_setup_default_domain(struct
> iommu_group *group,
>  	 */
>  	direct_failed = false;
>  	for_each_group_device(group, gdev) {
> -		if (iommu_create_device_direct_mappings(dom, gdev->dev))
> +		if (iommu_create_device_direct_mappings(dom, gdev->dev))
> {
>  			direct_failed = true;
> +			dev_warn_once(
> +				gdev->dev->iommu->iommu_dev->dev,
> +				"IOMMU driver was not able to establish FW
> requested direct mapping.");
> +		}
>  	}
> 
>  	/* We must set default_domain early for
> __iommu_device_set_domain */
> @@ -2906,10 +2910,20 @@ static int iommu_setup_default_domain(struct
> iommu_group *group,
>  	 * will not continuously have the IOMMU_RESV_DIRECT map.
>  	 */
>  	if (direct_failed) {
> -		for_each_group_device(group, gdev)
> -			iommu_create_device_direct_mappings(dom, gdev-
> >dev);
> +		for_each_group_device(group, gdev) {
> +			ret = iommu_create_device_direct_mappings(dom,
> gdev->dev);
> +			if (ret)
> +				goto err_restore;
> +		}
>  	}
> 
> +err_restore:
> +	if (old_dom) {
> +		__iommu_group_set_domain_internal(
> +			group, old_dom,
> IOMMU_SET_DOMAIN_MUST_SUCCEED);
> +		iommu_domain_free(dom);
> +		old_dom = NULL;
> +	}
>  out_free:
>  	if (old_dom)
>  		iommu_domain_free(old_dom);
> 
> Thanks,
> Jason


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v4 00/17] Consolidate the error handling around device attachment
  2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
                   ` (16 preceding siblings ...)
  2023-04-12 13:51 ` [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type() Jason Gunthorpe
@ 2023-05-01 22:01 ` Heiko Stübner
  17 siblings, 0 replies; 35+ messages in thread
From: Heiko Stübner @ 2023-05-01 22:01 UTC (permalink / raw)
  To: iommu, Joerg Roedel, llvm, Nathan Chancellor, Nick Desaulniers,
	Miguel Ojeda, Robin Murphy, Tom Rix, Will Deacon,
	Jason Gunthorpe
  Cc: Lu Baolu, Kevin Tian, Nicolin Chen

Am Mittwoch, 12. April 2023, 15:51:30 CEST schrieb Jason Gunthorpe:
> Jason Gunthorpe (17):
>   iommu: Replace iommu_group_device_count() with list_count_nodes()
>   iommu: Add for_each_group_device()
>   iommu: Make __iommu_group_set_domain() handle error unwind
>   iommu: Use __iommu_group_set_domain() for __iommu_attach_group()
>   iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain()
>   iommu: Replace __iommu_group_dma_first_attach() with set_domain
>   iommu: Remove iommu_group_do_dma_first_attach() from
>     iommu_group_add_device()
>   iommu: Replace iommu_group_do_dma_first_attach with
>     __iommu_device_set_domain
>   iommu: Fix iommu_probe_device() to attach the right domain
>   iommu: Do iommu_group_create_direct_mappings() before attach
>   iommu: Remove the assignment of group->domain during default domain
>     alloc
>   iommu: Consolidate the code to calculate the target default domain
>     type
>   iommu: Revise iommu_group_alloc_default_domain()
>   iommu: Consolidate the default_domain setup to one function
>   iommu: Allow IOMMU_RESV_DIRECT to work on ARM
>   iommu: Remove __iommu_group_for_each_dev()
>   iommu: Tidy the control flow in iommu_group_store_type()

This probably will need an update for Greg's recent
b18d0a0f92a8 ("iommu: make the pointer to struct bus_type constant")
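
(If the conflict is only the constified bus pointer, the fixup is presumably
along the lines of

	-int bus_iommu_probe(struct bus_type *bus)
	+int bus_iommu_probe(const struct bus_type *bus)

for the bus_type parameters this series touches.)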

I've fixed that locally for me for testing, and with that:

On both arm32+arm64 Rockchip boards (rk3288-pinky + px30-minievb)
the display keeps working, so

Tested-by: Heiko Stuebner <heiko@sntech.de>



^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2023-05-01 22:01 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-12 13:51 [PATCH v4 00/17] Consolidate the error handling around device attachment Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 01/17] iommu: Replace iommu_group_device_count() with list_count_nodes() Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 02/17] iommu: Add for_each_group_device() Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 03/17] iommu: Make __iommu_group_set_domain() handle error unwind Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 04/17] iommu: Use __iommu_group_set_domain() for __iommu_attach_group() Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 05/17] iommu: Use __iommu_group_set_domain() in iommu_change_dev_def_domain() Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 06/17] iommu: Replace __iommu_group_dma_first_attach() with set_domain Jason Gunthorpe
2023-04-13  2:35   ` Baolu Lu
2023-04-20  7:25   ` Tian, Kevin
2023-04-20 12:59     ` Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 07/17] iommu: Remove iommu_group_do_dma_first_attach() from iommu_group_add_device() Jason Gunthorpe
2023-04-20  7:26   ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 08/17] iommu: Replace iommu_group_do_dma_first_attach with __iommu_device_set_domain Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 09/17] iommu: Fix iommu_probe_device() to attach the right domain Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 10/17] iommu: Do iommu_group_create_direct_mappings() before attach Jason Gunthorpe
2023-04-20  7:29   ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 11/17] iommu: Remove the assignment of group->domain during default domain alloc Jason Gunthorpe
2023-04-12 13:51 ` [PATCH v4 12/17] iommu: Consolidate the code to calculate the target default domain type Jason Gunthorpe
2023-04-20  7:30   ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 13/17] iommu: Revise iommu_group_alloc_default_domain() Jason Gunthorpe
2023-04-13  2:47   ` Baolu Lu
2023-04-20  7:54   ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 14/17] iommu: Consolidate the default_domain setup to one function Jason Gunthorpe
2023-04-20  7:57   ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 15/17] iommu: Allow IOMMU_RESV_DIRECT to work on ARM Jason Gunthorpe
2023-04-20  8:00   ` Tian, Kevin
2023-04-26 13:36     ` Jason Gunthorpe
2023-04-26 23:48       ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 16/17] iommu: Remove __iommu_group_for_each_dev() Jason Gunthorpe
2023-04-20  8:06   ` Tian, Kevin
2023-04-20 13:01     ` Jason Gunthorpe
2023-04-21  3:05       ` Tian, Kevin
2023-04-12 13:51 ` [PATCH v4 17/17] iommu: Tidy the control flow in iommu_group_store_type() Jason Gunthorpe
2023-04-20  8:07   ` Tian, Kevin
2023-05-01 22:01 ` [PATCH v4 00/17] Consolidate the error handling around device attachment Heiko Stübner
