* [PATCH v4 0/6] iommu: Allow IOVA rcache range be configured
@ 2021-07-14 10:36 ` John Garry
  0 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

For streaming DMA mappings involving an IOMMU and whose IOVA length regularly
exceeds the IOVA rcache upper limit (meaning that they are not cached),
performance can be reduced.

This may be much more pronounced from commit 4e89dce72521 ("iommu/iova:
Retry from last rb tree node if iova search fails"), as discussed at [0].

IOVAs which cannot be cached are highly involved in the IOVA ageing issue,
as discussed at [1].

This series allows the IOVA rcache range to be configured, so that we may
cache all IOVAs per domain, thus improving performance.

A new IOMMU group sysfs file is added - max_opt_dma_size - which is used
indirectly to configure the IOVA rcache range:
/sys/kernel/iommu_groups/X/max_opt_dma_size

This file is updated in the same way as the IOMMU group default domain type,
i.e. the only device in the group must first be unbound from its driver.

The inspiration here comes from the block layer request queue sysfs
"optimal_io_size" file, in /sys/block/sdX/queue/optimal_io_size.
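As a sketch of the intended administrator flow (the device address, group
number, value, and driver name below are all hypothetical; the unbind/bind
steps follow the standard sysfs driver-unbind mechanism already required for
changing the default domain type):

```shell
# Hypothetical PCI device 0000:74:02.0, the only member of IOMMU group 3.
# 1. Unbind the device from its driver (required before the write):
echo 0000:74:02.0 > /sys/bus/pci/devices/0000:74:02.0/driver/unbind

# 2. Set the max optimal DMA size for the group (value in bytes, assumed):
echo 262144 > /sys/kernel/iommu_groups/3/max_opt_dma_size

# 3. Rebind the device; the default domain is then reallocated with the
#    matching IOVA rcache range:
echo 0000:74:02.0 > /sys/bus/pci/drivers/some_driver/bind
```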

Some figures for a storage scenario (when increasing the IOVA rcache range to
cover all DMA mapping sizes from the LLD):
v5.13-rc1 baseline:			1200K IOPS
With series:				1800K IOPS

All above are for IOMMU strict mode. Non-strict mode gives ~1800K IOPS in
all scenarios.

[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
[1] https://lore.kernel.org/linux-iommu/1607538189-237944-1-git-send-email-john.garry@huawei.com/

Note that I cc'ed maintainers/reviewers only for the changes associated
with patch #5, since it touches their code in only a minor way.

John Garry (6):
  iommu: Refactor iommu_group_store_type()
  iova: Allow rcache range upper limit to be flexible
  iommu: Allow iommu_change_dev_def_domain() realloc default domain for
    same type
  iommu: Allow max opt DMA len be set for a group via sysfs
  iova: Add iova_len argument to init_iova_domain()
  dma-iommu: Pass iova len for IOVA domain init

 .../ABI/testing/sysfs-kernel-iommu_groups     |  16 ++
 drivers/gpu/drm/tegra/drm.c                   |   2 +-
 drivers/gpu/host1x/dev.c                      |   2 +-
 drivers/iommu/dma-iommu.c                     |  15 +-
 drivers/iommu/iommu.c                         | 172 ++++++++++++------
 drivers/iommu/iova.c                          |  39 +++-
 drivers/staging/media/ipu3/ipu3-dmamap.c      |   2 +-
 drivers/staging/media/tegra-vde/iommu.c       |   2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c              |   2 +-
 include/linux/iommu.h                         |   6 +
 include/linux/iova.h                          |   9 +-
 11 files changed, 194 insertions(+), 73 deletions(-)

-- 
2.26.2



* [PATCH v4 1/6] iommu: Refactor iommu_group_store_type()
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

Function iommu_group_store_type() supports changing the default domain
of an IOMMU group.

Many conditions need to be satisfied and steps taken for this action to be
successful.

Satisfying these conditions and taking these steps will also be required for
setting other IOMMU group attributes, so factor the code out into a common
part and a part specific to updating the given IOMMU group attribute.

No functional change intended.

Some code comments are also tidied up.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/iommu.c | 73 +++++++++++++++++++++++++++----------------
 1 file changed, 46 insertions(+), 27 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 5419c4b9f27a..8a815ac261f0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3166,20 +3166,23 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 }
 
 /*
- * Changing the default domain through sysfs requires the users to ubind the
- * drivers from the devices in the iommu group. Return failure if this doesn't
- * meet.
+ * Changing the default domain or any other IOMMU group attribute through sysfs
+ * requires the users to unbind the drivers from the devices in the IOMMU group.
+ * Return failure if this precondition is not met.
  *
  * We need to consider the race between this and the device release path.
  * device_lock(dev) is used here to guarantee that the device release path
  * will not be entered at the same time.
  */
-static ssize_t iommu_group_store_type(struct iommu_group *group,
-				      const char *buf, size_t count)
+static ssize_t iommu_group_store_common(struct iommu_group *group,
+					const char *buf, size_t count,
+					int (*cb)(const char *buf,
+						  struct iommu_group *group,
+						  struct device *dev))
 {
 	struct group_device *grp_dev;
 	struct device *dev;
-	int ret, req_type;
+	int ret;
 
 	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
 		return -EACCES;
@@ -3187,25 +3190,16 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	if (WARN_ON(!group))
 		return -EINVAL;
 
-	if (sysfs_streq(buf, "identity"))
-		req_type = IOMMU_DOMAIN_IDENTITY;
-	else if (sysfs_streq(buf, "DMA"))
-		req_type = IOMMU_DOMAIN_DMA;
-	else if (sysfs_streq(buf, "auto"))
-		req_type = 0;
-	else
-		return -EINVAL;
-
 	/*
 	 * Lock/Unlock the group mutex here before device lock to
-	 * 1. Make sure that the iommu group has only one device (this is a
+	 * 1. Make sure that the IOMMU group has only one device (this is a
 	 *    prerequisite for step 2)
 	 * 2. Get struct *dev which is needed to lock device
 	 */
 	mutex_lock(&group->mutex);
 	if (iommu_group_device_count(group) != 1) {
 		mutex_unlock(&group->mutex);
-		pr_err_ratelimited("Cannot change default domain: Group has more than one device\n");
+		pr_err_ratelimited("Cannot change IOMMU group default domain attribute: Group has more than one device\n");
 		return -EINVAL;
 	}
 
@@ -3217,16 +3211,16 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	/*
 	 * Don't hold the group mutex because taking group mutex first and then
 	 * the device lock could potentially cause a deadlock as below. Assume
-	 * two threads T1 and T2. T1 is trying to change default domain of an
-	 * iommu group and T2 is trying to hot unplug a device or release [1] VF
-	 * of a PCIe device which is in the same iommu group. T1 takes group
-	 * mutex and before it could take device lock assume T2 has taken device
-	 * lock and is yet to take group mutex. Now, both the threads will be
-	 * waiting for the other thread to release lock. Below, lock order was
-	 * suggested.
+	 * two threads, T1 and T2. T1 is trying to change default domain
+	 * attribute of an IOMMU group and T2 is trying to hot unplug a device
+	 * or release [1] VF of a PCIe device which is in the same IOMMU group.
+	 * T1 takes the group mutex and before it could take device lock T2 may
+	 * have taken device lock and is yet to take group mutex. Now, both the
+	 * threads will be waiting for the other thread to release lock. Below,
+	 * lock order was suggested.
 	 * device_lock(dev);
 	 *	mutex_lock(&group->mutex);
-	 *		iommu_change_dev_def_domain();
+	 *		cb->iommu_change_dev_def_domain(); [example cb]
 	 *	mutex_unlock(&group->mutex);
 	 * device_unlock(dev);
 	 *
@@ -3240,7 +3234,7 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	 */
 	mutex_unlock(&group->mutex);
 
-	/* Check if the device in the group still has a driver bound to it */
+	/* Check if the only device in the group still has a driver bound */
 	device_lock(dev);
 	if (device_is_bound(dev)) {
 		pr_err_ratelimited("Device is still bound to driver\n");
@@ -3248,7 +3242,7 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 		goto out;
 	}
 
-	ret = iommu_change_dev_def_domain(group, dev, req_type);
+	ret = (cb)(buf, group, dev);
 	ret = ret ?: count;
 
 out:
@@ -3257,3 +3251,28 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 
 	return ret;
 }
+
+static int iommu_group_store_type_cb(const char *buf,
+				     struct iommu_group *group,
+				     struct device *dev)
+{
+	int type;
+
+	if (sysfs_streq(buf, "identity"))
+		type = IOMMU_DOMAIN_IDENTITY;
+	else if (sysfs_streq(buf, "DMA"))
+		type = IOMMU_DOMAIN_DMA;
+	else if (sysfs_streq(buf, "auto"))
+		type = 0;
+	else
+		return -EINVAL;
+
+	return iommu_change_dev_def_domain(group, dev, type);
+}
+
+static ssize_t iommu_group_store_type(struct iommu_group *group,
+				      const char *buf, size_t count)
+{
+	return iommu_group_store_common(group, buf, count,
+					iommu_group_store_type_cb);
+}
-- 
2.26.2



* [PATCH v4 2/6] iova: Allow rcache range upper limit to be flexible
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

Some LLDs may request DMA mappings whose IOVA length exceeds that of the
current rcache upper limit.

This means that allocations for those IOVAs will never be cached, and must
always be allocated and freed from the RB tree per DMA mapping cycle.
This has a significant effect on performance, more so since commit
4e89dce72521 ("iommu/iova: Retry from last rb tree node if iova search
fails"), as discussed at [0].

As a first step towards allowing the rcache range upper limit to be
configured, hold this value in the IOVA rcache structure, and allocate
the rcaches separately.

[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/dma-iommu.c |  2 +-
 drivers/iommu/iova.c      | 23 +++++++++++++++++------
 include/linux/iova.h      |  4 ++--
 3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 98ba927aee1a..4772278aa5da 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -434,7 +434,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	 * rounding up anything cacheable to make sure that can't happen. The
 	 * order of the unadjusted size will still match upon freeing.
 	 */
-	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
+	if (iova_len < (1 << (iovad->rcache_max_size - 1)))
 		iova_len = roundup_pow_of_two(iova_len);
 
 	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b6cf5f16123b..07ce73fdd8c1 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -15,6 +15,8 @@
 /* The anchor node sits above the top of the usable address space */
 #define IOVA_ANCHOR	~0UL
 
+#define IOVA_RANGE_CACHE_MAX_SIZE 6    /* log of max cached IOVA range size (in pages) */
+
 static bool iova_rcache_insert(struct iova_domain *iovad,
 			       unsigned long pfn,
 			       unsigned long size);
@@ -881,7 +883,14 @@ static void init_iova_rcaches(struct iova_domain *iovad)
 	unsigned int cpu;
 	int i;
 
-	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+	iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
+
+	iovad->rcaches = kcalloc(iovad->rcache_max_size,
+				 sizeof(*iovad->rcaches), GFP_KERNEL);
+	if (!iovad->rcaches)
+		return;
+
+	for (i = 0; i < iovad->rcache_max_size; ++i) {
 		rcache = &iovad->rcaches[i];
 		spin_lock_init(&rcache->lock);
 		rcache->depot_size = 0;
@@ -956,7 +965,7 @@ static bool iova_rcache_insert(struct iova_domain *iovad, unsigned long pfn,
 {
 	unsigned int log_size = order_base_2(size);
 
-	if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
+	if (log_size >= iovad->rcache_max_size)
 		return false;
 
 	return __iova_rcache_insert(iovad, &iovad->rcaches[log_size], pfn);
@@ -1012,7 +1021,7 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
 {
 	unsigned int log_size = order_base_2(size);
 
-	if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
+	if (log_size >= iovad->rcache_max_size)
 		return 0;
 
 	return __iova_rcache_get(&iovad->rcaches[log_size], limit_pfn - size);
@@ -1028,7 +1037,7 @@ static void free_iova_rcaches(struct iova_domain *iovad)
 	unsigned int cpu;
 	int i, j;
 
-	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+	for (i = 0; i < iovad->rcache_max_size; ++i) {
 		rcache = &iovad->rcaches[i];
 		for_each_possible_cpu(cpu) {
 			cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
@@ -1039,6 +1048,8 @@ static void free_iova_rcaches(struct iova_domain *iovad)
 		for (j = 0; j < rcache->depot_size; ++j)
 			iova_magazine_free(rcache->depot[j]);
 	}
+
+	kfree(iovad->rcaches);
 }
 
 /*
@@ -1051,7 +1062,7 @@ static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
 	unsigned long flags;
 	int i;
 
-	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+	for (i = 0; i < iovad->rcache_max_size; ++i) {
 		rcache = &iovad->rcaches[i];
 		cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
 		spin_lock_irqsave(&cpu_rcache->lock, flags);
@@ -1070,7 +1081,7 @@ static void free_global_cached_iovas(struct iova_domain *iovad)
 	unsigned long flags;
 	int i, j;
 
-	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+	for (i = 0; i < iovad->rcache_max_size; ++i) {
 		rcache = &iovad->rcaches[i];
 		spin_lock_irqsave(&rcache->lock, flags);
 		for (j = 0; j < rcache->depot_size; ++j) {
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 71d8a2de6635..9974e1d3e2bc 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -25,7 +25,6 @@ struct iova {
 struct iova_magazine;
 struct iova_cpu_rcache;
 
-#define IOVA_RANGE_CACHE_MAX_SIZE 6	/* log of max cached IOVA range size (in pages) */
 #define MAX_GLOBAL_MAGS 32	/* magazines per bin */
 
 struct iova_rcache {
@@ -74,6 +73,7 @@ struct iova_domain {
 	unsigned long	start_pfn;	/* Lower limit for this domain */
 	unsigned long	dma_32bit_pfn;
 	unsigned long	max32_alloc_size; /* Size of last failed allocation */
+	unsigned long	rcache_max_size; /* Upper limit of cached IOVA range */
 	struct iova_fq __percpu *fq;	/* Flush Queue */
 
 	atomic64_t	fq_flush_start_cnt;	/* Number of TLB flushes that
@@ -83,7 +83,6 @@ struct iova_domain {
 						   have been finished */
 
 	struct iova	anchor;		/* rbtree lookup anchor */
-	struct iova_rcache rcaches[IOVA_RANGE_CACHE_MAX_SIZE];	/* IOVA range caches */
 
 	iova_flush_cb	flush_cb;	/* Call-Back function to flush IOMMU
 					   TLBs */
@@ -96,6 +95,7 @@ struct iova_domain {
 	atomic_t fq_timer_on;			/* 1 when timer is active, 0
 						   when not */
 	struct hlist_node	cpuhp_dead;
+	struct iova_rcache *rcaches;	/* IOVA range caches */
 };
 
 static inline unsigned long iova_size(struct iova *iova)
-- 
2.26.2



* [PATCH v4 3/6] iommu: Allow iommu_change_dev_def_domain() realloc default domain for same type
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

Allow iommu_change_dev_def_domain() to create a new default domain of the
same type as the current one when the type argument is unset.

Also remove the comment about the function's purpose, which would otherwise
become stale.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/iommu.c | 54 ++++++++++++++++++++++---------------------
 1 file changed, 28 insertions(+), 26 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8a815ac261f0..d8198a9aff4e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3036,6 +3036,7 @@ u32 iommu_sva_get_pasid(struct iommu_sva *handle)
 }
 EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
 
+
 /*
  * Changes the default domain of an iommu group that has *only* one device
  *
@@ -3043,16 +3044,13 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
  * @prev_dev: The device in the group (this is used to make sure that the device
  *	 hasn't changed after the caller has called this function)
  * @type: The type of the new default domain that gets associated with the group
+ * @new: Allocate new default domain, keeping same type when no type passed
  *
  * Returns 0 on success and error code on failure
  *
- * Note:
- * 1. Presently, this function is called only when user requests to change the
- *    group's default domain type through /sys/kernel/iommu_groups/<grp_id>/type
- *    Please take a closer look if intended to use for other purposes.
  */
 static int iommu_change_dev_def_domain(struct iommu_group *group,
-				       struct device *prev_dev, int type)
+				       struct device *prev_dev, int type, bool new)
 {
 	struct iommu_domain *prev_dom;
 	struct group_device *grp_dev;
@@ -3102,28 +3100,32 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 		goto out;
 	}
 
-	dev_def_dom = iommu_get_def_domain_type(dev);
-	if (!type) {
+	if (new && !type) {
+		type = prev_dom->type;
+	} else {
+		dev_def_dom = iommu_get_def_domain_type(dev);
+		if (!type) {
+			/*
+			 * If the user hasn't requested any specific type of domain and
+			 * if the device supports both the domains, then default to the
+			 * domain the device was booted with
+			 */
+			type = dev_def_dom ? : iommu_def_domain_type;
+		} else if (dev_def_dom && type != dev_def_dom) {
+			dev_err_ratelimited(prev_dev, "Device cannot be in %s domain\n",
+					    iommu_domain_type_str(type));
+			ret = -EINVAL;
+			goto out;
+		}
+
 		/*
-		 * If the user hasn't requested any specific type of domain and
-		 * if the device supports both the domains, then default to the
-		 * domain the device was booted with
+		 * Switch to a new domain only if the requested domain type is different
+		 * from the existing default domain type
 		 */
-		type = dev_def_dom ? : iommu_def_domain_type;
-	} else if (dev_def_dom && type != dev_def_dom) {
-		dev_err_ratelimited(prev_dev, "Device cannot be in %s domain\n",
-				    iommu_domain_type_str(type));
-		ret = -EINVAL;
-		goto out;
-	}
-
-	/*
-	 * Switch to a new domain only if the requested domain type is different
-	 * from the existing default domain type
-	 */
-	if (prev_dom->type == type) {
-		ret = 0;
-		goto out;
+		if (prev_dom->type == type) {
+			ret = 0;
+			goto out;
+		}
 	}
 
 	/* Sets group->default_domain to the newly allocated domain */
@@ -3267,7 +3269,7 @@ static int iommu_group_store_type_cb(const char *buf,
 	else
 		return -EINVAL;
 
-	return iommu_change_dev_def_domain(group, dev, type);
+	return iommu_change_dev_def_domain(group, dev, type, false);
 }
 
 static ssize_t iommu_group_store_type(struct iommu_group *group,
-- 
2.26.2
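The restructured control flow is easier to see outside the diff. A hypothetical Python model of the type-selection logic (function and parameter names invented for illustration, None standing in for an unset type) behaves like this:

```python
def resolve_domain_type(requested, prev, dev_default, system_default, new):
    """Hypothetical model of the type selection in
    iommu_change_dev_def_domain() after this patch.

    Returns the type to allocate, or None when no change is needed.
    """
    if new and requested is None:
        # Realloc path: proceed with the current default domain type,
        # even though the type itself is unchanged.
        return prev
    if requested is None:
        # No type requested: prefer the device's own default type,
        # falling back to the system-wide default.
        requested = dev_default or system_default
    elif dev_default and requested != dev_default:
        # The device only supports its own default domain type.
        raise ValueError("device cannot be in requested domain")
    if requested == prev:
        return None  # same type as today: nothing to do
    return requested

# Reallocating with an unset type keeps the current type:
assert resolve_domain_type(None, "DMA", None, "DMA", new=True) == "DMA"
# The plain sysfs "type" path still short-circuits on an identical type:
assert resolve_domain_type(None, "DMA", None, "DMA", new=False) is None
```

Only the realloc path proceeds when the resolved type equals the existing one; the sysfs type-store path keeps its early-out.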



* [PATCH v4 4/6] iommu: Allow max opt DMA len be set for a group via sysfs
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

Add support to allow the maximum optimised DMA length to be set for an IOMMU
group via sysfs.

This works in much the same way as the method to change the default domain
type for a group.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 .../ABI/testing/sysfs-kernel-iommu_groups     | 16 ++++++
 drivers/iommu/iommu.c                         | 51 ++++++++++++++++++-
 include/linux/iommu.h                         |  6 +++
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-iommu_groups b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
index eae2f1c1e11e..c5a15b768dcc 100644
--- a/Documentation/ABI/testing/sysfs-kernel-iommu_groups
+++ b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
@@ -59,3 +59,19 @@ Description:	/sys/kernel/iommu_groups/<grp_id>/type shows the type of default
 		system could lead to catastrophic effects (the users might
 		need to reboot the machine to get it to normal state). So, it's
 		expected that the users understand what they're doing.
+
+What:		/sys/kernel/iommu_groups/<grp_id>/max_opt_dma_size
+Date:		July 2021
+KernelVersion:	v5.15
+Contact:	John Garry <john.garry@huawei.com>
+Description:	/sys/kernel/iommu_groups/<grp_id>/max_opt_dma_size shows the
+		max optimised DMA size for the default IOMMU domain associated
+		with the group.
+		Each IOMMU domain has an IOVA domain. The IOVA domain caches
+		IOVAs up to a certain size as a performance optimisation.
+		This sysfs file allows the IOVA domain's caching range to be
+		set, so that larger-than-default IOVAs may be cached.
+		A value of 0 means that the default caching range is chosen.
+		A privileged user may request that the kernel change the range
+		by writing to this file. For this to happen, the same rules
+		and procedure apply as for changing the default domain type.
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d8198a9aff4e..38ec1c56e00b 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -45,6 +45,7 @@ struct iommu_group {
 	struct iommu_domain *default_domain;
 	struct iommu_domain *domain;
 	struct list_head entry;
+	size_t max_opt_dma_size;
 };
 
 struct group_device {
@@ -86,6 +87,9 @@ static int iommu_create_device_direct_mappings(struct iommu_group *group,
 static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
 static ssize_t iommu_group_store_type(struct iommu_group *group,
 				      const char *buf, size_t count);
+static ssize_t iommu_group_store_max_opt_dma_size(struct iommu_group *group,
+						  const char *buf,
+						  size_t count);
 
 #define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\
 struct iommu_group_attribute iommu_group_attr_##_name =		\
@@ -554,6 +558,12 @@ static ssize_t iommu_group_show_type(struct iommu_group *group,
 	return strlen(type);
 }
 
+static ssize_t iommu_group_show_max_opt_dma_size(struct iommu_group *group,
+				     char *buf)
+{
+	return sprintf(buf, "%zu\n", group->max_opt_dma_size);
+}
+
 static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);
 
 static IOMMU_GROUP_ATTR(reserved_regions, 0444,
@@ -562,6 +572,9 @@ static IOMMU_GROUP_ATTR(reserved_regions, 0444,
 static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
 			iommu_group_store_type);
 
+static IOMMU_GROUP_ATTR(max_opt_dma_size, 0644, iommu_group_show_max_opt_dma_size,
+			iommu_group_store_max_opt_dma_size);
+
 static void iommu_group_release(struct kobject *kobj)
 {
 	struct iommu_group *group = to_iommu_group(kobj);
@@ -648,6 +661,10 @@ struct iommu_group *iommu_group_alloc(void)
 	if (ret)
 		return ERR_PTR(ret);
 
+	ret = iommu_group_create_file(group, &iommu_group_attr_max_opt_dma_size);
+	if (ret)
+		return ERR_PTR(ret);
+
 	pr_debug("Allocated group %d\n", group->id);
 
 	return group;
@@ -2279,6 +2296,11 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 	return dev->iommu_group->default_domain;
 }
 
+size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group)
+{
+	return group->max_opt_dma_size;
+}
+
 /*
  * IOMMU groups are really the natural working unit of the IOMMU, but
  * the IOMMU API works on domains and devices.  Bridge that gap by
@@ -3045,12 +3067,14 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
  *	 hasn't changed after the caller has called this function)
  * @type: The type of the new default domain that gets associated with the group
  * @new: Allocate new default domain, keeping same type when no type passed
+ * @max_opt_dma_size: If non-zero, set the IOMMU group max_opt_dma_size on success
  *
  * Returns 0 on success and error code on failure
  *
  */
 static int iommu_change_dev_def_domain(struct iommu_group *group,
-				       struct device *prev_dev, int type, bool new)
+				       struct device *prev_dev, int type, bool new,
+				       unsigned long max_opt_dma_size)
 {
 	struct iommu_domain *prev_dom;
 	struct group_device *grp_dev;
@@ -3143,6 +3167,9 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
 
 	group->domain = group->default_domain;
 
+	if (max_opt_dma_size)
+		group->max_opt_dma_size = max_opt_dma_size;
+
 	/*
 	 * Release the mutex here because ops->probe_finalize() call-back of
 	 * some vendor IOMMU drivers calls arm_iommu_attach_device() which
@@ -3269,7 +3296,7 @@ static int iommu_group_store_type_cb(const char *buf,
 	else
 		return -EINVAL;
 
-	return iommu_change_dev_def_domain(group, dev, type, false);
+	return iommu_change_dev_def_domain(group, dev, type, false, 0);
 }
 
 static ssize_t iommu_group_store_type(struct iommu_group *group,
@@ -3278,3 +3305,23 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
 	return iommu_group_store_common(group, buf, count,
 					iommu_group_store_type_cb);
 }
+
+static int iommu_group_store_max_opt_dma_size_cb(const char *buf,
+						 struct iommu_group *group,
+						 struct device *dev)
+{
+	unsigned long val;
+
+	if (kstrtoul(buf, 0, &val) || !val)
+		return -EINVAL;
+
+	return iommu_change_dev_def_domain(group, dev, 0, true, val);
+}
+
+static ssize_t iommu_group_store_max_opt_dma_size(struct iommu_group *group,
+						  const char *buf,
+						  size_t count)
+{
+	return iommu_group_store_common(group, buf, count,
+					iommu_group_store_max_opt_dma_size_cb);
+}
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..e26abda94792 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -405,6 +405,7 @@ extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain,
 				   struct device *dev, ioasid_t pasid);
 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
+extern size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot);
 extern int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
@@ -653,6 +654,11 @@ static inline struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
 	return NULL;
 }
 
+static inline size_t iommu_group_get_max_opt_dma_size(struct iommu_group *group)
+{
+	return 0;
+}
+
 static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
 			    phys_addr_t paddr, size_t size, int prot)
 {
-- 
2.26.2
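As with the existing `type` file, the group's only device must be unbound from its driver before `max_opt_dma_size` can be written. A hypothetical session (the PCI address, driver name, and group number are invented for illustration):

```shell
# Unbind the only device in IOMMU group 4 from its driver.
echo 0000:74:02.0 > /sys/bus/pci/drivers/some_driver/unbind

# Writing the file reallocates the group's default domain (same type),
# here requesting an rcache range that covers 1 MiB mappings.
echo 1048576 > /sys/kernel/iommu_groups/4/max_opt_dma_size

# Rebind the device to its driver.
echo 0000:74:02.0 > /sys/bus/pci/drivers/some_driver/bind
```

This mirrors the documented procedure for changing the group's default domain type.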



* [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

Add an iova_len argument to init_iova_domain(), and use it to set the
rcache range.

Also fix up all users to pass this value (as 0, meaning use the default).

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/gpu/drm/tegra/drm.c              |  2 +-
 drivers/gpu/host1x/dev.c                 |  2 +-
 drivers/iommu/dma-iommu.c                |  2 +-
 drivers/iommu/iova.c                     | 18 +++++++++++++-----
 drivers/staging/media/ipu3/ipu3-dmamap.c |  2 +-
 drivers/staging/media/tegra-vde/iommu.c  |  2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c         |  2 +-
 include/linux/iova.h                     |  5 +++--
 8 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index f96c237b2242..c5fb2396ac81 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -1164,7 +1164,7 @@ static int host1x_drm_probe(struct host1x_device *dev)
 
 		order = __ffs(tegra->domain->pgsize_bitmap);
 		init_iova_domain(&tegra->carveout.domain, 1UL << order,
-				 carveout_start >> order);
+				 carveout_start >> order, 0);
 
 		tegra->carveout.shift = iova_shift(&tegra->carveout.domain);
 		tegra->carveout.limit = carveout_end >> tegra->carveout.shift;
diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
index fbb6447b8659..3cd02ffbd50e 100644
--- a/drivers/gpu/host1x/dev.c
+++ b/drivers/gpu/host1x/dev.c
@@ -278,7 +278,7 @@ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
 		end = geometry->aperture_end & host->info->dma_mask;
 
 		order = __ffs(host->domain->pgsize_bitmap);
-		init_iova_domain(&host->iova, 1UL << order, start >> order);
+		init_iova_domain(&host->iova, 1UL << order, start >> order, 0);
 		host->iova_end = end;
 
 		domain = host->domain;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4772278aa5da..b540b586fe37 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -368,7 +368,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		return 0;
 	}
 
-	init_iova_domain(iovad, 1UL << order, base_pfn);
+	init_iova_domain(iovad, 1UL << order, base_pfn, 0);
 
 	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
 	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 07ce73fdd8c1..0c26aeada1ac 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -23,7 +23,7 @@ static bool iova_rcache_insert(struct iova_domain *iovad,
 static unsigned long iova_rcache_get(struct iova_domain *iovad,
 				     unsigned long size,
 				     unsigned long limit_pfn);
-static void init_iova_rcaches(struct iova_domain *iovad);
+static void init_iova_rcaches(struct iova_domain *iovad, unsigned long iova_len);
 static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 static void fq_destroy_all_entries(struct iova_domain *iovad);
@@ -48,7 +48,7 @@ static struct iova *to_iova(struct rb_node *node)
 
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn)
+	unsigned long start_pfn, unsigned long iova_len)
 {
 	/*
 	 * IOVA granularity will normally be equal to the smallest
@@ -71,7 +71,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
 	rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
 	cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD, &iovad->cpuhp_dead);
-	init_iova_rcaches(iovad);
+	init_iova_rcaches(iovad, iova_len);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
@@ -876,14 +876,22 @@ static void iova_magazine_push(struct iova_magazine *mag, unsigned long pfn)
 	mag->pfns[mag->size++] = pfn;
 }
 
-static void init_iova_rcaches(struct iova_domain *iovad)
+static unsigned long iova_len_to_rcache_max(unsigned long iova_len)
+{
+	return order_base_2(iova_len) + 1;
+}
+
+static void init_iova_rcaches(struct iova_domain *iovad, unsigned long iova_len)
 {
 	struct iova_cpu_rcache *cpu_rcache;
 	struct iova_rcache *rcache;
 	unsigned int cpu;
 	int i;
 
-	iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
+	if (iova_len)
+		iovad->rcache_max_size = iova_len_to_rcache_max(iova_len);
+	else
+		iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
 
 	iovad->rcaches = kcalloc(iovad->rcache_max_size,
 				 sizeof(*iovad->rcaches), GFP_KERNEL);
diff --git a/drivers/staging/media/ipu3/ipu3-dmamap.c b/drivers/staging/media/ipu3/ipu3-dmamap.c
index 8a19b0024152..dad8789873e8 100644
--- a/drivers/staging/media/ipu3/ipu3-dmamap.c
+++ b/drivers/staging/media/ipu3/ipu3-dmamap.c
@@ -238,7 +238,7 @@ int imgu_dmamap_init(struct imgu_device *imgu)
 
 	order = __ffs(IPU3_PAGE_SIZE);
 	base_pfn = max_t(unsigned long, 1, imgu->mmu->aperture_start >> order);
-	init_iova_domain(&imgu->iova_domain, 1UL << order, base_pfn);
+	init_iova_domain(&imgu->iova_domain, 1UL << order, base_pfn, 0);
 
 	return 0;
 }
diff --git a/drivers/staging/media/tegra-vde/iommu.c b/drivers/staging/media/tegra-vde/iommu.c
index adf8dc7ee25c..8d351a4f8608 100644
--- a/drivers/staging/media/tegra-vde/iommu.c
+++ b/drivers/staging/media/tegra-vde/iommu.c
@@ -89,7 +89,7 @@ int tegra_vde_iommu_init(struct tegra_vde *vde)
 		goto free_domain;
 
 	order = __ffs(vde->domain->pgsize_bitmap);
-	init_iova_domain(&vde->iova, 1UL << order, 0);
+	init_iova_domain(&vde->iova, 1UL << order, 0, 0);
 
 	err = iommu_attach_group(vde->domain, vde->group);
 	if (err)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 14e024de5cbf..a4bdff10157f 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -292,7 +292,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 		goto err_iommu;
 
 	/* For simplicity we use an IOVA allocator with byte granularity */
-	init_iova_domain(&vdpasim->iova, 1, 0);
+	init_iova_domain(&vdpasim->iova, 1, 0, 0);
 
 	vdpasim->vdpa.dma_dev = dev;
 
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 9974e1d3e2bc..591c736fdb7f 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -152,7 +152,7 @@ unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn);
+	unsigned long start_pfn, unsigned long iova_len);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
@@ -212,7 +212,8 @@ static inline struct iova *reserve_iova(struct iova_domain *iovad,
 
 static inline void init_iova_domain(struct iova_domain *iovad,
 				    unsigned long granule,
-				    unsigned long start_pfn)
+				    unsigned long start_pfn,
+				    unsigned long iova_len)
 {
 }
 
-- 
2.26.2
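The new `iova_len_to_rcache_max()` helper just computes `order_base_2(iova_len) + 1`, since rcache bin i holds IOVAs of up to 2^i granule pages. A quick Python model of that arithmetic, with `(n - 1).bit_length()` standing in for the kernel's `order_base_2(n)` for n >= 1:

```python
def iova_len_to_rcache_max(iova_len):
    # Number of rcache bins so that the top bin covers allocations of
    # iova_len granule pages.
    return (iova_len - 1).bit_length() + 1

# The old fixed IOVA_RANGE_CACHE_MAX_SIZE of 6 is recovered for lengths
# up to 2^5 = 32 pages (128 KiB with a 4 KiB IOVA granule):
assert iova_len_to_rcache_max(32) == 6
# One more bin is needed as soon as the length exceeds that power of two:
assert iova_len_to_rcache_max(33) == 7
```

So passing the group's max opt DMA length (converted to pages) simply grows the bin array until the largest expected mapping size becomes cacheable.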


+++ b/drivers/iommu/iova.c
@@ -23,7 +23,7 @@ static bool iova_rcache_insert(struct iova_domain *iovad,
 static unsigned long iova_rcache_get(struct iova_domain *iovad,
 				     unsigned long size,
 				     unsigned long limit_pfn);
-static void init_iova_rcaches(struct iova_domain *iovad);
+static void init_iova_rcaches(struct iova_domain *iovad, unsigned long iova_len);
 static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 static void fq_destroy_all_entries(struct iova_domain *iovad);
@@ -48,7 +48,7 @@ static struct iova *to_iova(struct rb_node *node)
 
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn)
+	unsigned long start_pfn, unsigned long iova_len)
 {
 	/*
 	 * IOVA granularity will normally be equal to the smallest
@@ -71,7 +71,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	rb_link_node(&iovad->anchor.node, NULL, &iovad->rbroot.rb_node);
 	rb_insert_color(&iovad->anchor.node, &iovad->rbroot);
 	cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD, &iovad->cpuhp_dead);
-	init_iova_rcaches(iovad);
+	init_iova_rcaches(iovad, iova_len);
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
@@ -876,14 +876,22 @@ static void iova_magazine_push(struct iova_magazine *mag, unsigned long pfn)
 	mag->pfns[mag->size++] = pfn;
 }
 
-static void init_iova_rcaches(struct iova_domain *iovad)
+static unsigned long iova_len_to_rcache_max(unsigned long iova_len)
+{
+	return order_base_2(iova_len) + 1;
+}
+
+static void init_iova_rcaches(struct iova_domain *iovad, unsigned long iova_len)
 {
 	struct iova_cpu_rcache *cpu_rcache;
 	struct iova_rcache *rcache;
 	unsigned int cpu;
 	int i;
 
-	iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
+	if (iova_len)
+		iovad->rcache_max_size = iova_len_to_rcache_max(iova_len);
+	else
+		iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
 
 	iovad->rcaches = kcalloc(iovad->rcache_max_size,
 				 sizeof(*iovad->rcaches), GFP_KERNEL);
diff --git a/drivers/staging/media/ipu3/ipu3-dmamap.c b/drivers/staging/media/ipu3/ipu3-dmamap.c
index 8a19b0024152..dad8789873e8 100644
--- a/drivers/staging/media/ipu3/ipu3-dmamap.c
+++ b/drivers/staging/media/ipu3/ipu3-dmamap.c
@@ -238,7 +238,7 @@ int imgu_dmamap_init(struct imgu_device *imgu)
 
 	order = __ffs(IPU3_PAGE_SIZE);
 	base_pfn = max_t(unsigned long, 1, imgu->mmu->aperture_start >> order);
-	init_iova_domain(&imgu->iova_domain, 1UL << order, base_pfn);
+	init_iova_domain(&imgu->iova_domain, 1UL << order, base_pfn, 0);
 
 	return 0;
 }
diff --git a/drivers/staging/media/tegra-vde/iommu.c b/drivers/staging/media/tegra-vde/iommu.c
index adf8dc7ee25c..8d351a4f8608 100644
--- a/drivers/staging/media/tegra-vde/iommu.c
+++ b/drivers/staging/media/tegra-vde/iommu.c
@@ -89,7 +89,7 @@ int tegra_vde_iommu_init(struct tegra_vde *vde)
 		goto free_domain;
 
 	order = __ffs(vde->domain->pgsize_bitmap);
-	init_iova_domain(&vde->iova, 1UL << order, 0);
+	init_iova_domain(&vde->iova, 1UL << order, 0, 0);
 
 	err = iommu_attach_group(vde->domain, vde->group);
 	if (err)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 14e024de5cbf..a4bdff10157f 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -292,7 +292,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 		goto err_iommu;
 
 	/* For simplicity we use an IOVA allocator with byte granularity */
-	init_iova_domain(&vdpasim->iova, 1, 0);
+	init_iova_domain(&vdpasim->iova, 1, 0, 0);
 
 	vdpasim->vdpa.dma_dev = dev;
 
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 9974e1d3e2bc..591c736fdb7f 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -152,7 +152,7 @@ unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
-	unsigned long start_pfn);
+	unsigned long start_pfn, unsigned long iova_len);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
@@ -212,7 +212,8 @@ static inline struct iova *reserve_iova(struct iova_domain *iovad,
 
 static inline void init_iova_domain(struct iova_domain *iovad,
 				    unsigned long granule,
-				    unsigned long start_pfn)
+				    unsigned long start_pfn,
+				    unsigned long iova_len)
 {
 }
 
-- 
2.26.2

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
  2021-07-14 10:36 ` John Garry
@ 2021-07-14 10:36   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-14 10:36 UTC (permalink / raw)
  To: joro, will, robin.murphy, baolu.lu
  Cc: iommu, linuxarm, thierry.reding, airlied, daniel, jonathanh,
	sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab, gregkh, digetx,
	mst, jasowang, linux-kernel, chenxiang66, John Garry

If set, pass the max opt IOVA len when initialising the IOVA domain.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/dma-iommu.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b540b586fe37..eee9f5f87935 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -335,6 +335,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	unsigned long order, base_pfn;
 	struct iova_domain *iovad;
+	size_t max_opt_dma_size;
+	unsigned long iova_len = 0;
 
 	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
 		return -EINVAL;
@@ -368,7 +370,16 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 		return 0;
 	}
 
-	init_iova_domain(iovad, 1UL << order, base_pfn, 0);
+
+	max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
+	if (max_opt_dma_size) {
+		unsigned long shift = __ffs(1UL << order);
+
+		iova_len = max_opt_dma_size >> shift;
+		iova_len = roundup_pow_of_two(iova_len);
+	}
+
+	init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
 
 	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
 	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 43+ messages in thread


* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
  2021-07-14 10:36   ` John Garry
  (?)
@ 2021-07-19  7:58 ` Dan Carpenter
  -1 siblings, 0 replies; 43+ messages in thread
From: kernel test robot @ 2021-07-15  1:36 UTC (permalink / raw)
  To: kbuild


CC: kbuild-all(a)lists.01.org
In-Reply-To: <1626259003-201303-7-git-send-email-john.garry@huawei.com>
References: <1626259003-201303-7-git-send-email-john.garry@huawei.com>
TO: John Garry <john.garry@huawei.com>
TO: joro(a)8bytes.org
TO: will(a)kernel.org
TO: robin.murphy(a)arm.com
TO: baolu.lu(a)linux.intel.com
CC: iommu(a)lists.linux-foundation.org
CC: linuxarm(a)huawei.com
CC: thierry.reding(a)gmail.com
CC: airlied(a)linux.ie
CC: daniel(a)ffwll.ch
CC: jonathanh(a)nvidia.com

Hi John,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linuxtv-media/master tegra/for-next tegra-drm/drm/tegra/for-next linus/master v5.14-rc1 next-20210714]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
:::::: branch date: 15 hours ago
:::::: commit date: 15 hours ago
config: ia64-randconfig-m031-20210714 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable dereferenced before check 'dev' (see line 374)

vim +/dev +384 drivers/iommu/dma-iommu.c

82c3cefb9f1652 Lu Baolu              2021-02-25  319  
0db2e5d18f76a6 Robin Murphy          2015-10-01  320  /**
0db2e5d18f76a6 Robin Murphy          2015-10-01  321   * iommu_dma_init_domain - Initialise a DMA mapping domain
0db2e5d18f76a6 Robin Murphy          2015-10-01  322   * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
0db2e5d18f76a6 Robin Murphy          2015-10-01  323   * @base: IOVA at which the mappable address space starts
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  324   * @limit: Last address of the IOVA space
fade1ec055dc6b Robin Murphy          2016-09-12  325   * @dev: Device the domain is being initialised for
0db2e5d18f76a6 Robin Murphy          2015-10-01  326   *
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  327   * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
0db2e5d18f76a6 Robin Murphy          2015-10-01  328   * avoid rounding surprises. If necessary, we reserve the page at address 0
0db2e5d18f76a6 Robin Murphy          2015-10-01  329   * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
0db2e5d18f76a6 Robin Murphy          2015-10-01  330   * any change which could make prior IOVAs invalid will fail.
0db2e5d18f76a6 Robin Murphy          2015-10-01  331   */
06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333  				 dma_addr_t limit, struct device *dev)
0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
fdbe574eb69312 Robin Murphy          2017-01-19  335  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
c61a4633a56aaa Shaokun Zhang         2019-01-24  336  	unsigned long order, base_pfn;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  337  	struct iova_domain *iovad;
de4ba360c3e4ed John Garry            2021-07-14  338  	size_t max_opt_dma_size;
de4ba360c3e4ed John Garry            2021-07-14  339  	unsigned long iova_len = 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  340  
fdbe574eb69312 Robin Murphy          2017-01-19  341  	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
fdbe574eb69312 Robin Murphy          2017-01-19  342  		return -EINVAL;
0db2e5d18f76a6 Robin Murphy          2015-10-01  343  
6b0c54e7f27159 Yunsheng Lin          2019-08-24  344  	iovad = &cookie->iovad;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  345  
0db2e5d18f76a6 Robin Murphy          2015-10-01  346  	/* Use the smallest supported page size for IOVA granularity */
d16e0faab911cc Robin Murphy          2016-04-07  347  	order = __ffs(domain->pgsize_bitmap);
0db2e5d18f76a6 Robin Murphy          2015-10-01  348  	base_pfn = max_t(unsigned long, 1, base >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  349  
0db2e5d18f76a6 Robin Murphy          2015-10-01  350  	/* Check the domain allows at least some access to the device... */
0db2e5d18f76a6 Robin Murphy          2015-10-01  351  	if (domain->geometry.force_aperture) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  352  		if (base > domain->geometry.aperture_end ||
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353  		    limit < domain->geometry.aperture_start) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  354  			pr_warn("specified DMA range outside IOMMU capability\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  355  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  356  		}
0db2e5d18f76a6 Robin Murphy          2015-10-01  357  		/* ...then finally give it a kicking to make sure it fits */
0db2e5d18f76a6 Robin Murphy          2015-10-01  358  		base_pfn = max_t(unsigned long, base_pfn,
0db2e5d18f76a6 Robin Murphy          2015-10-01  359  				domain->geometry.aperture_start >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  360  	}
0db2e5d18f76a6 Robin Murphy          2015-10-01  361  
f51d7bb79c1124 Robin Murphy          2017-01-16  362  	/* start_pfn is always nonzero for an already-initialised domain */
0db2e5d18f76a6 Robin Murphy          2015-10-01  363  	if (iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  364  		if (1UL << order != iovad->granule ||
f51d7bb79c1124 Robin Murphy          2017-01-16  365  		    base_pfn != iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  366  			pr_warn("Incompatible range for DMA domain\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  367  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  368  		}
7c1b058c8b5a31 Robin Murphy          2017-03-16  369  
7c1b058c8b5a31 Robin Murphy          2017-03-16  370  		return 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  371  	}
7c1b058c8b5a31 Robin Murphy          2017-03-16  372  
de4ba360c3e4ed John Garry            2021-07-14  373  
de4ba360c3e4ed John Garry            2021-07-14 @374  	max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
de4ba360c3e4ed John Garry            2021-07-14  375  	if (max_opt_dma_size) {
de4ba360c3e4ed John Garry            2021-07-14  376  		unsigned long shift = __ffs(1UL << order);
de4ba360c3e4ed John Garry            2021-07-14  377  
de4ba360c3e4ed John Garry            2021-07-14  378  		iova_len = max_opt_dma_size >> shift;
de4ba360c3e4ed John Garry            2021-07-14  379  		iova_len = roundup_pow_of_two(iova_len);
de4ba360c3e4ed John Garry            2021-07-14  380  	}
de4ba360c3e4ed John Garry            2021-07-14  381  
de4ba360c3e4ed John Garry            2021-07-14  382  	init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
2da274cdf998a1 Zhen Lei              2018-09-20  383  
82c3cefb9f1652 Lu Baolu              2021-02-25 @384  	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
a250c23f15c21c Robin Murphy          2021-04-01  385  	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
b34e9b0de3c411 Tom Murphy            2020-09-10  386  		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
2a2b8eaa5b2566 Tom Murphy            2020-11-24  387  					  iommu_dma_entry_dtor))
b34e9b0de3c411 Tom Murphy            2020-09-10  388  			pr_warn("iova flush queue initialization failed\n");
b34e9b0de3c411 Tom Murphy            2020-09-10  389  		else
2da274cdf998a1 Zhen Lei              2018-09-20  390  			cookie->fq_domain = domain;
2da274cdf998a1 Zhen Lei              2018-09-20  391  	}
2da274cdf998a1 Zhen Lei              2018-09-20  392  
7c1b058c8b5a31 Robin Murphy          2017-03-16  393  	if (!dev)
0db2e5d18f76a6 Robin Murphy          2015-10-01  394  		return 0;
7c1b058c8b5a31 Robin Murphy          2017-03-16  395  
7c1b058c8b5a31 Robin Murphy          2017-03-16  396  	return iova_reserve_iommu_regions(dev, domain);
0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }
0db2e5d18f76a6 Robin Murphy          2015-10-01  398  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 36406 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
@ 2021-07-19  7:58 ` Dan Carpenter
  0 siblings, 0 replies; 43+ messages in thread
From: Dan Carpenter @ 2021-07-19  7:58 UTC (permalink / raw)
  To: kbuild, John Garry, joro, will, robin.murphy, baolu.lu
  Cc: kbuild-all, lkp, airlied, linuxarm, jonathanh, iommu,
	thierry.reding, daniel

Hi John,

url:    https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: ia64-randconfig-m031-20210714 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable dereferenced before check 'dev' (see line 374)

vim +/dev +384 drivers/iommu/dma-iommu.c

06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333  				 dma_addr_t limit, struct device *dev)
0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
fdbe574eb69312 Robin Murphy          2017-01-19  335  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
c61a4633a56aaa Shaokun Zhang         2019-01-24  336  	unsigned long order, base_pfn;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  337  	struct iova_domain *iovad;
de4ba360c3e4ed John Garry            2021-07-14  338  	size_t max_opt_dma_size;
de4ba360c3e4ed John Garry            2021-07-14  339  	unsigned long iova_len = 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  340  
fdbe574eb69312 Robin Murphy          2017-01-19  341  	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
fdbe574eb69312 Robin Murphy          2017-01-19  342  		return -EINVAL;
0db2e5d18f76a6 Robin Murphy          2015-10-01  343  
6b0c54e7f27159 Yunsheng Lin          2019-08-24  344  	iovad = &cookie->iovad;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  345  
0db2e5d18f76a6 Robin Murphy          2015-10-01  346  	/* Use the smallest supported page size for IOVA granularity */
d16e0faab911cc Robin Murphy          2016-04-07  347  	order = __ffs(domain->pgsize_bitmap);
0db2e5d18f76a6 Robin Murphy          2015-10-01  348  	base_pfn = max_t(unsigned long, 1, base >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  349  
0db2e5d18f76a6 Robin Murphy          2015-10-01  350  	/* Check the domain allows at least some access to the device... */
0db2e5d18f76a6 Robin Murphy          2015-10-01  351  	if (domain->geometry.force_aperture) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  352  		if (base > domain->geometry.aperture_end ||
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353  		    limit < domain->geometry.aperture_start) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  354  			pr_warn("specified DMA range outside IOMMU capability\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  355  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  356  		}
0db2e5d18f76a6 Robin Murphy          2015-10-01  357  		/* ...then finally give it a kicking to make sure it fits */
0db2e5d18f76a6 Robin Murphy          2015-10-01  358  		base_pfn = max_t(unsigned long, base_pfn,
0db2e5d18f76a6 Robin Murphy          2015-10-01  359  				domain->geometry.aperture_start >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  360  	}
0db2e5d18f76a6 Robin Murphy          2015-10-01  361  
f51d7bb79c1124 Robin Murphy          2017-01-16  362  	/* start_pfn is always nonzero for an already-initialised domain */
0db2e5d18f76a6 Robin Murphy          2015-10-01  363  	if (iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  364  		if (1UL << order != iovad->granule ||
f51d7bb79c1124 Robin Murphy          2017-01-16  365  		    base_pfn != iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  366  			pr_warn("Incompatible range for DMA domain\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  367  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  368  		}
7c1b058c8b5a31 Robin Murphy          2017-03-16  369  
7c1b058c8b5a31 Robin Murphy          2017-03-16  370  		return 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  371  	}
7c1b058c8b5a31 Robin Murphy          2017-03-16  372  
de4ba360c3e4ed John Garry            2021-07-14  373  
de4ba360c3e4ed John Garry            2021-07-14 @374  	max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
                                                                                                            ^^^^^^^^^^^^^^^^
New unchecked dereference

de4ba360c3e4ed John Garry            2021-07-14  375  	if (max_opt_dma_size) {
de4ba360c3e4ed John Garry            2021-07-14  376  		unsigned long shift = __ffs(1UL << order);
de4ba360c3e4ed John Garry            2021-07-14  377  
de4ba360c3e4ed John Garry            2021-07-14  378  		iova_len = max_opt_dma_size >> shift;
de4ba360c3e4ed John Garry            2021-07-14  379  		iova_len = roundup_pow_of_two(iova_len);
de4ba360c3e4ed John Garry            2021-07-14  380  	}
de4ba360c3e4ed John Garry            2021-07-14  381  
de4ba360c3e4ed John Garry            2021-07-14  382  	init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
2da274cdf998a1 Zhen Lei              2018-09-20  383  
82c3cefb9f1652 Lu Baolu              2021-02-25 @384  	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
                                                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


a250c23f15c21c Robin Murphy          2021-04-01  385  	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
b34e9b0de3c411 Tom Murphy            2020-09-10  386  		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
2a2b8eaa5b2566 Tom Murphy            2020-11-24  387  					  iommu_dma_entry_dtor))
b34e9b0de3c411 Tom Murphy            2020-09-10  388  			pr_warn("iova flush queue initialization failed\n");
b34e9b0de3c411 Tom Murphy            2020-09-10  389  		else
2da274cdf998a1 Zhen Lei              2018-09-20  390  			cookie->fq_domain = domain;
2da274cdf998a1 Zhen Lei              2018-09-20  391  	}
2da274cdf998a1 Zhen Lei              2018-09-20  392  
7c1b058c8b5a31 Robin Murphy          2017-03-16  393  	if (!dev)
                                                            ^^^^
Old code has checks for NULL

0db2e5d18f76a6 Robin Murphy          2015-10-01  394  		return 0;
7c1b058c8b5a31 Robin Murphy          2017-03-16  395  
7c1b058c8b5a31 Robin Murphy          2017-03-16  396  	return iova_reserve_iommu_regions(dev, domain);
0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
@ 2021-07-19  7:58 ` Dan Carpenter
  0 siblings, 0 replies; 43+ messages in thread
From: Dan Carpenter @ 2021-07-19  7:58 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 6735 bytes --]

Hi John,

url:    https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: ia64-randconfig-m031-20210714 (attached as .config)
compiler: ia64-linux-gcc (GCC) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable dereferenced before check 'dev' (see line 374)

vim +/dev +384 drivers/iommu/dma-iommu.c

06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333  				 dma_addr_t limit, struct device *dev)
0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
fdbe574eb69312 Robin Murphy          2017-01-19  335  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
c61a4633a56aaa Shaokun Zhang         2019-01-24  336  	unsigned long order, base_pfn;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  337  	struct iova_domain *iovad;
de4ba360c3e4ed John Garry            2021-07-14  338  	size_t max_opt_dma_size;
de4ba360c3e4ed John Garry            2021-07-14  339  	unsigned long iova_len = 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  340  
fdbe574eb69312 Robin Murphy          2017-01-19  341  	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
fdbe574eb69312 Robin Murphy          2017-01-19  342  		return -EINVAL;
0db2e5d18f76a6 Robin Murphy          2015-10-01  343  
6b0c54e7f27159 Yunsheng Lin          2019-08-24  344  	iovad = &cookie->iovad;
6b0c54e7f27159 Yunsheng Lin          2019-08-24  345  
0db2e5d18f76a6 Robin Murphy          2015-10-01  346  	/* Use the smallest supported page size for IOVA granularity */
d16e0faab911cc Robin Murphy          2016-04-07  347  	order = __ffs(domain->pgsize_bitmap);
0db2e5d18f76a6 Robin Murphy          2015-10-01  348  	base_pfn = max_t(unsigned long, 1, base >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  349  
0db2e5d18f76a6 Robin Murphy          2015-10-01  350  	/* Check the domain allows at least some access to the device... */
0db2e5d18f76a6 Robin Murphy          2015-10-01  351  	if (domain->geometry.force_aperture) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  352  		if (base > domain->geometry.aperture_end ||
ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353  		    limit < domain->geometry.aperture_start) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  354  			pr_warn("specified DMA range outside IOMMU capability\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  355  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  356  		}
0db2e5d18f76a6 Robin Murphy          2015-10-01  357  		/* ...then finally give it a kicking to make sure it fits */
0db2e5d18f76a6 Robin Murphy          2015-10-01  358  		base_pfn = max_t(unsigned long, base_pfn,
0db2e5d18f76a6 Robin Murphy          2015-10-01  359  				domain->geometry.aperture_start >> order);
0db2e5d18f76a6 Robin Murphy          2015-10-01  360  	}
0db2e5d18f76a6 Robin Murphy          2015-10-01  361  
f51d7bb79c1124 Robin Murphy          2017-01-16  362  	/* start_pfn is always nonzero for an already-initialised domain */
0db2e5d18f76a6 Robin Murphy          2015-10-01  363  	if (iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  364  		if (1UL << order != iovad->granule ||
f51d7bb79c1124 Robin Murphy          2017-01-16  365  		    base_pfn != iovad->start_pfn) {
0db2e5d18f76a6 Robin Murphy          2015-10-01  366  			pr_warn("Incompatible range for DMA domain\n");
0db2e5d18f76a6 Robin Murphy          2015-10-01  367  			return -EFAULT;
0db2e5d18f76a6 Robin Murphy          2015-10-01  368  		}
7c1b058c8b5a31 Robin Murphy          2017-03-16  369  
7c1b058c8b5a31 Robin Murphy          2017-03-16  370  		return 0;
0db2e5d18f76a6 Robin Murphy          2015-10-01  371  	}
7c1b058c8b5a31 Robin Murphy          2017-03-16  372  
de4ba360c3e4ed John Garry            2021-07-14  373  
de4ba360c3e4ed John Garry            2021-07-14 @374  	max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
                                                                                                            ^^^^^^^^^^^^^^^^
New unchecked dereference

de4ba360c3e4ed John Garry            2021-07-14  375  	if (max_opt_dma_size) {
de4ba360c3e4ed John Garry            2021-07-14  376  		unsigned long shift = __ffs(1UL << order);
de4ba360c3e4ed John Garry            2021-07-14  377  
de4ba360c3e4ed John Garry            2021-07-14  378  		iova_len = max_opt_dma_size >> shift;
de4ba360c3e4ed John Garry            2021-07-14  379  		iova_len = roundup_pow_of_two(iova_len);
de4ba360c3e4ed John Garry            2021-07-14  380  	}
de4ba360c3e4ed John Garry            2021-07-14  381  
de4ba360c3e4ed John Garry            2021-07-14  382  	init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
2da274cdf998a1 Zhen Lei              2018-09-20  383  
82c3cefb9f1652 Lu Baolu              2021-02-25 @384  	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
                                                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


a250c23f15c21c Robin Murphy          2021-04-01  385  	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
b34e9b0de3c411 Tom Murphy            2020-09-10  386  		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
2a2b8eaa5b2566 Tom Murphy            2020-11-24  387  					  iommu_dma_entry_dtor))
b34e9b0de3c411 Tom Murphy            2020-09-10  388  			pr_warn("iova flush queue initialization failed\n");
b34e9b0de3c411 Tom Murphy            2020-09-10  389  		else
2da274cdf998a1 Zhen Lei              2018-09-20  390  			cookie->fq_domain = domain;
2da274cdf998a1 Zhen Lei              2018-09-20  391  	}
2da274cdf998a1 Zhen Lei              2018-09-20  392  
7c1b058c8b5a31 Robin Murphy          2017-03-16  393  	if (!dev)
                                                            ^^^^
Old code has checks for NULL

0db2e5d18f76a6 Robin Murphy          2015-10-01  394  		return 0;
7c1b058c8b5a31 Robin Murphy          2017-03-16  395  
7c1b058c8b5a31 Robin Murphy          2017-03-16  396  	return iova_reserve_iommu_regions(dev, domain);
0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }
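
For reference, the iova_len computation in the hunk above (patch lines 375-380) reduces max_opt_dma_size to a count of IOVA granules and rounds that up to a power of two; note that the patch's shift equals order, since __ffs(1UL << order) is simply order. A standalone userspace sketch of that arithmetic (reimplementing roundup_pow_of_two, which is a kernel helper, so this is an illustration rather than the actual kernel code):

```c
#include <assert.h>

/* Userspace stand-in for the kernel's roundup_pow_of_two(). */
static unsigned long roundup_pow_of_two_ul(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/*
 * Mirror of the patch's iova_len computation: shrink the optimal DMA
 * size (in bytes) down to granules of size (1UL << order), then round
 * up to a power of two so the rcache range covers every mapping up to
 * that size. Returns 0 when no optimal size is configured, matching
 * the patch's default.
 */
static unsigned long iova_len_for(unsigned long max_opt_dma_size,
				  unsigned int order)
{
	unsigned long iova_len = 0;

	if (max_opt_dma_size) {
		iova_len = max_opt_dma_size >> order;
		iova_len = roundup_pow_of_two_ul(iova_len);
	}
	return iova_len;
}
```

For example, with a 4K granule (order 12), a 1 MiB optimal size gives 256 granules, which is already a power of two; one byte more rounds up to 512.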

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
  2021-07-19  7:58 ` Dan Carpenter
@ 2021-07-19  9:12   ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-19  9:12 UTC (permalink / raw)
  To: Dan Carpenter, kbuild, joro, will, robin.murphy, baolu.lu
  Cc: kbuild-all, lkp, airlied, Linuxarm, jonathanh, iommu,
	thierry.reding, daniel

On 19/07/2021 08:58, Dan Carpenter wrote:
> Hi John,
> 
> url:    https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
> config: ia64-randconfig-m031-20210714 (attached as .config)
> compiler: ia64-linux-gcc (GCC) 9.3.0
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> 
> smatch warnings:
> drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable dereferenced before check 'dev' (see line 374)
> 

thanks for the notice

> vim +/dev +384 drivers/iommu/dma-iommu.c
> 
> 06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333  				 dma_addr_t limit, struct device *dev)
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
> fdbe574eb69312 Robin Murphy          2017-01-19  335  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> c61a4633a56aaa Shaokun Zhang         2019-01-24  336  	unsigned long order, base_pfn;
> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  337  	struct iova_domain *iovad;
> de4ba360c3e4ed John Garry            2021-07-14  338  	size_t max_opt_dma_size;
> de4ba360c3e4ed John Garry            2021-07-14  339  	unsigned long iova_len = 0;
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  340
> fdbe574eb69312 Robin Murphy          2017-01-19  341  	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
> fdbe574eb69312 Robin Murphy          2017-01-19  342  		return -EINVAL;
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  343
> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  344  	iovad = &cookie->iovad;
> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  345
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  346  	/* Use the smallest supported page size for IOVA granularity */
> d16e0faab911cc Robin Murphy          2016-04-07  347  	order = __ffs(domain->pgsize_bitmap);
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  348  	base_pfn = max_t(unsigned long, 1, base >> order);
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  349
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  350  	/* Check the domain allows at least some access to the device... */
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  351  	if (domain->geometry.force_aperture) {
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  352  		if (base > domain->geometry.aperture_end ||
> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353  		    limit < domain->geometry.aperture_start) {
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  354  			pr_warn("specified DMA range outside IOMMU capability\n");
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  355  			return -EFAULT;
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  356  		}
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  357  		/* ...then finally give it a kicking to make sure it fits */
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  358  		base_pfn = max_t(unsigned long, base_pfn,
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  359  				domain->geometry.aperture_start >> order);
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  360  	}
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  361
> f51d7bb79c1124 Robin Murphy          2017-01-16  362  	/* start_pfn is always nonzero for an already-initialised domain */
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  363  	if (iovad->start_pfn) {
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  364  		if (1UL << order != iovad->granule ||
> f51d7bb79c1124 Robin Murphy          2017-01-16  365  		    base_pfn != iovad->start_pfn) {
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  366  			pr_warn("Incompatible range for DMA domain\n");
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  367  			return -EFAULT;
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  368  		}
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  369
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  370  		return 0;
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  371  	}
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  372
> de4ba360c3e4ed John Garry            2021-07-14  373
> de4ba360c3e4ed John Garry            2021-07-14 @374  	max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
>                                                                                                              ^^^^^^^^^^^^^^^^
> New unchecked dereference
> 
> de4ba360c3e4ed John Garry            2021-07-14  375  	if (max_opt_dma_size) {
> de4ba360c3e4ed John Garry            2021-07-14  376  		unsigned long shift = __ffs(1UL << order);
> de4ba360c3e4ed John Garry            2021-07-14  377
> de4ba360c3e4ed John Garry            2021-07-14  378  		iova_len = max_opt_dma_size >> shift;
> de4ba360c3e4ed John Garry            2021-07-14  379  		iova_len = roundup_pow_of_two(iova_len);
> de4ba360c3e4ed John Garry            2021-07-14  380  	}
> de4ba360c3e4ed John Garry            2021-07-14  381
> de4ba360c3e4ed John Garry            2021-07-14  382  	init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
> 2da274cdf998a1 Zhen Lei              2018-09-20  383
> 82c3cefb9f1652 Lu Baolu              2021-02-25 @384  	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
>                                                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> 
> a250c23f15c21c Robin Murphy          2021-04-01  385  	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
> b34e9b0de3c411 Tom Murphy            2020-09-10  386  		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
> 2a2b8eaa5b2566 Tom Murphy            2020-11-24  387  					  iommu_dma_entry_dtor))
> b34e9b0de3c411 Tom Murphy            2020-09-10  388  			pr_warn("iova flush queue initialization failed\n");
> b34e9b0de3c411 Tom Murphy            2020-09-10  389  		else
> 2da274cdf998a1 Zhen Lei              2018-09-20  390  			cookie->fq_domain = domain;
> 2da274cdf998a1 Zhen Lei              2018-09-20  391  	}
> 2da274cdf998a1 Zhen Lei              2018-09-20  392
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  393  	if (!dev)
>                                                              ^^^^
> Old code has checks for NULL
> 

I doubt that we need this check in practice.

iommu_dma_init_domain() is only called from iommu_setup_dma_ops(), and by 
the time we get here iommu_setup_dma_ops() has already passed dev to 
iommu_get_domain_for_dev(), which itself cannot safely handle dev == NULL. 
As such, the dev == NULL checks in iommu_dma_init_domain() are effectively 
redundant.
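
That said, if the check were kept rather than removed, the warning could 
also be silenced by moving the new dereference behind a NULL guard. A 
minimal userspace sketch of that pattern (the struct layouts and the 
helper name here are hypothetical stand-ins, not the actual kernel types):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct iommu_group / struct device. */
struct iommu_group_sketch { unsigned long max_opt_dma_size; };
struct device_sketch { struct iommu_group_sketch *iommu_group; };

/*
 * Guarded variant of the flagged line: only dereference dev when it is
 * non-NULL, falling back to 0 (meaning "no configured optimal size"),
 * so the dereference no longer precedes the later '!dev' check.
 */
static unsigned long max_opt_dma_size_of(const struct device_sketch *dev)
{
	if (!dev || !dev->iommu_group)
		return 0;
	return dev->iommu_group->max_opt_dma_size;
}
```

Dropping the dead NULL checks, as suggested above, is the cleaner fix; 
the guard is only worth it if a NULL dev caller could ever reappear.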


> 0db2e5d18f76a6 Robin Murphy          2015-10-01  394  		return 0;
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  395
> 7c1b058c8b5a31 Robin Murphy          2017-03-16  396  	return iova_reserve_iommu_regions(dev, domain);
> 0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
> 
> .
> 

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
  2021-07-19  9:12   ` John Garry
@ 2021-07-19  9:32     ` Robin Murphy
  -1 siblings, 0 replies; 43+ messages in thread
From: Robin Murphy @ 2021-07-19  9:32 UTC (permalink / raw)
  To: John Garry, Dan Carpenter, kbuild, joro, will, baolu.lu
  Cc: kbuild-all, lkp, airlied, Linuxarm, jonathanh, iommu,
	thierry.reding, daniel

On 2021-07-19 10:12, John Garry wrote:
> On 19/07/2021 08:58, Dan Carpenter wrote:
>> Hi John,
>>
>> url:    
>> https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328 
>>
>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 
>> next
>> config: ia64-randconfig-m031-20210714 (attached as .config)
>> compiler: ia64-linux-gcc (GCC) 9.3.0
>>
>> If you fix the issue, kindly add following tag as appropriate
>> Reported-by: kernel test robot <lkp@intel.com>
>> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
>>
>> smatch warnings:
>> drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable 
>> dereferenced before check 'dev' (see line 374)
>>
> 
> thanks for the notice
> 
>> vim +/dev +384 drivers/iommu/dma-iommu.c
>>
>> 06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int 
>> iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333                   
>> dma_addr_t limit, struct device *dev)
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
>> fdbe574eb69312 Robin Murphy          2017-01-19  335      struct 
>> iommu_dma_cookie *cookie = domain->iova_cookie;
>> c61a4633a56aaa Shaokun Zhang         2019-01-24  336      unsigned 
>> long order, base_pfn;
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  337      struct 
>> iova_domain *iovad;
>> de4ba360c3e4ed John Garry            2021-07-14  338      size_t 
>> max_opt_dma_size;
>> de4ba360c3e4ed John Garry            2021-07-14  339      unsigned 
>> long iova_len = 0;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  340
>> fdbe574eb69312 Robin Murphy          2017-01-19  341      if (!cookie 
>> || cookie->type != IOMMU_DMA_IOVA_COOKIE)
>> fdbe574eb69312 Robin Murphy          2017-01-19  342          return 
>> -EINVAL;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  343
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  344      iovad = 
>> &cookie->iovad;
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  345
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  346      /* Use the 
>> smallest supported page size for IOVA granularity */
>> d16e0faab911cc Robin Murphy          2016-04-07  347      order = 
>> __ffs(domain->pgsize_bitmap);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  348      base_pfn = 
>> max_t(unsigned long, 1, base >> order);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  349
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  350      /* Check the 
>> domain allows at least some access to the device... */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  351      if 
>> (domain->geometry.force_aperture) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  352          if (base 
>> > domain->geometry.aperture_end ||
>> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353              
>> limit < domain->geometry.aperture_start) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  354              
>> pr_warn("specified DMA range outside IOMMU capability\n");
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  355              
>> return -EFAULT;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  356          }
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  357          /* 
>> ...then finally give it a kicking to make sure it fits */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  358          base_pfn 
>> = max_t(unsigned long, base_pfn,
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  359                  
>> domain->geometry.aperture_start >> order);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  360      }
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  361
>> f51d7bb79c1124 Robin Murphy          2017-01-16  362      /* start_pfn 
>> is always nonzero for an already-initialised domain */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  363      if 
>> (iovad->start_pfn) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  364          if (1UL 
>> << order != iovad->granule ||
>> f51d7bb79c1124 Robin Murphy          2017-01-16  365              
>> base_pfn != iovad->start_pfn) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  366              
>> pr_warn("Incompatible range for DMA domain\n");
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  367              
>> return -EFAULT;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  368          }
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  369
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  370          return 0;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  371      }
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  372
>> de4ba360c3e4ed John Garry            2021-07-14  373
>> de4ba360c3e4ed John Garry            2021-07-14 @374      
>> max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
>>                                                                                                              
>> ^^^^^^^^^^^^^^^^
>> New unchecked dereference
>>
>> de4ba360c3e4ed John Garry            2021-07-14  375      if 
>> (max_opt_dma_size) {
>> de4ba360c3e4ed John Garry            2021-07-14  376          unsigned 
>> long shift = __ffs(1UL << order);
>> de4ba360c3e4ed John Garry            2021-07-14  377
>> de4ba360c3e4ed John Garry            2021-07-14  378          iova_len 
>> = max_opt_dma_size >> shift;
>> de4ba360c3e4ed John Garry            2021-07-14  379          iova_len 
>> = roundup_pow_of_two(iova_len);
>> de4ba360c3e4ed John Garry            2021-07-14  380      }
>> de4ba360c3e4ed John Garry            2021-07-14  381
>> de4ba360c3e4ed John Garry            2021-07-14  382      
>> init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
>> 2da274cdf998a1 Zhen Lei              2018-09-20  383
>> 82c3cefb9f1652 Lu Baolu              2021-02-25 @384      if 
>> (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
>>                                                                                      
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>
>>
>> a250c23f15c21c Robin Murphy          2021-04-01  385          
>> domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
>> b34e9b0de3c411 Tom Murphy            2020-09-10  386          if 
>> (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
>> 2a2b8eaa5b2566 Tom Murphy            2020-11-24  
>> 387                        iommu_dma_entry_dtor))
>> b34e9b0de3c411 Tom Murphy            2020-09-10  388              
>> pr_warn("iova flush queue initialization failed\n");
>> b34e9b0de3c411 Tom Murphy            2020-09-10  389          else
>> 2da274cdf998a1 Zhen Lei              2018-09-20  390              
>> cookie->fq_domain = domain;
>> 2da274cdf998a1 Zhen Lei              2018-09-20  391      }
>> 2da274cdf998a1 Zhen Lei              2018-09-20  392
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  393      if (!dev)
>>                                                              ^^^^
>> Old code has checks for NULL
>>
> 
> I doubt that in practice we need this check.
> 
> Function iommu_dma_init_domain() is only called by 
> iommu_setup_dma_ops(). Furthermore, iommu_setup_dma_ops() calls 
> iommu_get_domain_for_dev(dev), which cannot safely handle dev == NULL 
> for when we call iommu_dma_init_domain() there. As such, the dev == NULL 
> checks in iommu_dma_init_domain() are effectively redundant.

Indeed, I have a patch for that in the stack I'm preparing:

https://gitlab.arm.com/linux-arm/linux-rm/-/commit/9b6cf2a214107c153ee278b1664f688888d7328f

Robin.

>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  394          return 0;
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  395
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  396      return 
>> iova_reserve_iommu_regions(dev, domain);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }
>>
>> ---
>> 0-DAY CI Kernel Test Service, Intel Corporation
>> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
>>
>> .
>>
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
@ 2021-07-19  9:32     ` Robin Murphy
  0 siblings, 0 replies; 43+ messages in thread
From: Robin Murphy @ 2021-07-19  9:32 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 9193 bytes --]

On 2021-07-19 10:12, John Garry wrote:
> On 19/07/2021 08:58, Dan Carpenter wrote:
>> Hi John,
>>
>> url:    
>> https://github.com/0day-ci/linux/commits/John-Garry/iommu-Allow-IOVA-rcache-range-be-configured/20210714-184328 
>>
>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 
>> next
>> config: ia64-randconfig-m031-20210714 (attached as .config)
>> compiler: ia64-linux-gcc (GCC) 9.3.0
>>
>> If you fix the issue, kindly add following tag as appropriate
>> Reported-by: kernel test robot <lkp@intel.com>
>> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
>>
>> smatch warnings:
>> drivers/iommu/dma-iommu.c:384 iommu_dma_init_domain() warn: variable 
>> dereferenced before check 'dev' (see line 374)
>>
> 
> thanks for the notice
> 
>> vim +/dev +384 drivers/iommu/dma-iommu.c
>>
>> 06d60728ff5c01 Christoph Hellwig     2019-05-20  332  static int 
>> iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  333                   
>> dma_addr_t limit, struct device *dev)
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  334  {
>> fdbe574eb69312 Robin Murphy          2017-01-19  335      struct 
>> iommu_dma_cookie *cookie = domain->iova_cookie;
>> c61a4633a56aaa Shaokun Zhang         2019-01-24  336      unsigned 
>> long order, base_pfn;
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  337      struct 
>> iova_domain *iovad;
>> de4ba360c3e4ed John Garry            2021-07-14  338      size_t 
>> max_opt_dma_size;
>> de4ba360c3e4ed John Garry            2021-07-14  339      unsigned 
>> long iova_len = 0;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  340
>> fdbe574eb69312 Robin Murphy          2017-01-19  341      if (!cookie 
>> || cookie->type != IOMMU_DMA_IOVA_COOKIE)
>> fdbe574eb69312 Robin Murphy          2017-01-19  342          return 
>> -EINVAL;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  343
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  344      iovad = 
>> &cookie->iovad;
>> 6b0c54e7f27159 Yunsheng Lin          2019-08-24  345
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  346      /* Use the 
>> smallest supported page size for IOVA granularity */
>> d16e0faab911cc Robin Murphy          2016-04-07  347      order = 
>> __ffs(domain->pgsize_bitmap);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  348      base_pfn = 
>> max_t(unsigned long, 1, base >> order);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  349
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  350      /* Check the 
>> domain allows at least some access to the device... */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  351      if 
>> (domain->geometry.force_aperture) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  352          if (base 
>> > domain->geometry.aperture_end ||
>> ac6d704679d343 Jean-Philippe Brucker 2021-06-18  353              
>> limit < domain->geometry.aperture_start) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  354              
>> pr_warn("specified DMA range outside IOMMU capability\n");
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  355              
>> return -EFAULT;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  356          }
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  357          /* 
>> ...then finally give it a kicking to make sure it fits */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  358          base_pfn 
>> = max_t(unsigned long, base_pfn,
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  359                  
>> domain->geometry.aperture_start >> order);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  360      }
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  361
>> f51d7bb79c1124 Robin Murphy          2017-01-16  362      /* start_pfn is always nonzero for an already-initialised domain */
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  363      if (iovad->start_pfn) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  364          if (1UL << order != iovad->granule ||
>> f51d7bb79c1124 Robin Murphy          2017-01-16  365              base_pfn != iovad->start_pfn) {
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  366              pr_warn("Incompatible range for DMA domain\n");
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  367              return -EFAULT;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  368          }
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  369
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  370          return 0;
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  371      }
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  372
>> de4ba360c3e4ed John Garry            2021-07-14  373
>> de4ba360c3e4ed John Garry            2021-07-14 @374      max_opt_dma_size = iommu_group_get_max_opt_dma_size(dev->iommu_group);
>>                                                                                                              ^^^^^^^^^^^^^^^^
>> New unchecked dereference
>>
>> de4ba360c3e4ed John Garry            2021-07-14  375      if (max_opt_dma_size) {
>> de4ba360c3e4ed John Garry            2021-07-14  376          unsigned long shift = __ffs(1UL << order);
>> de4ba360c3e4ed John Garry            2021-07-14  377
>> de4ba360c3e4ed John Garry            2021-07-14  378          iova_len = max_opt_dma_size >> shift;
>> de4ba360c3e4ed John Garry            2021-07-14  379          iova_len = roundup_pow_of_two(iova_len);
>> de4ba360c3e4ed John Garry            2021-07-14  380      }
>> de4ba360c3e4ed John Garry            2021-07-14  381
>> de4ba360c3e4ed John Garry            2021-07-14  382      init_iova_domain(iovad, 1UL << order, base_pfn, iova_len);
>> 2da274cdf998a1 Zhen Lei              2018-09-20  383
>> 82c3cefb9f1652 Lu Baolu              2021-02-25 @384      if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
>>                                                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>
>>
>> a250c23f15c21c Robin Murphy          2021-04-01  385          domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
>> b34e9b0de3c411 Tom Murphy            2020-09-10  386          if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
>> 2a2b8eaa5b2566 Tom Murphy            2020-11-24  387                        iommu_dma_entry_dtor))
>> b34e9b0de3c411 Tom Murphy            2020-09-10  388              pr_warn("iova flush queue initialization failed\n");
>> b34e9b0de3c411 Tom Murphy            2020-09-10  389          else
>> 2da274cdf998a1 Zhen Lei              2018-09-20  390              cookie->fq_domain = domain;
>> 2da274cdf998a1 Zhen Lei              2018-09-20  391      }
>> 2da274cdf998a1 Zhen Lei              2018-09-20  392
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  393      if (!dev)
>>                                                              ^^^^
>> Old code has checks for NULL
>>
> 
> I doubt that in practice we need this check.
> 
> Function iommu_dma_init_domain() is only called by 
> iommu_setup_dma_ops(). Furthermore, iommu_setup_dma_ops() calls 
> iommu_get_domain_for_dev(dev), which cannot safely handle dev == NULL 
> for when we call iommu_dma_init_domain() there. As such, the dev == NULL 
> checks in iommu_dma_init_domain() are effectively redundant.

Indeed, I have a patch for that in the stack I'm preparing:

https://gitlab.arm.com/linux-arm/linux-rm/-/commit/9b6cf2a214107c153ee278b1664f688888d7328f

Robin.

>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  394          return 0;
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  395
>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  396      return iova_reserve_iommu_regions(dev, domain);
>> 0db2e5d18f76a6 Robin Murphy          2015-10-01  397  }
>>
>> ---
>> 0-DAY CI Kernel Test Service, Intel Corporation
>> https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org
>>
>> .
>>
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init
  2021-07-19  9:32     ` Robin Murphy
@ 2021-07-19 10:45       ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-07-19 10:45 UTC (permalink / raw)
  To: Robin Murphy, Dan Carpenter, kbuild, joro, will, baolu.lu
  Cc: kbuild-all, lkp, airlied, Linuxarm, jonathanh, iommu,
	thierry.reding, daniel

On 19/07/2021 10:32, Robin Murphy wrote:
>>> 7c1b058c8b5a31 Robin Murphy          2017-03-16  393      if (!dev)
>>>                                                              ^^^^
>>> Old code has checks for NULL
>>>
>>
>> I doubt that in practice we need this check.
>>
>> Function iommu_dma_init_domain() is only called by 
>> iommu_setup_dma_ops(). Furthermore, iommu_setup_dma_ops() calls 
>> iommu_get_domain_for_dev(dev), which cannot safely handle dev == NULL 
>> for when we call iommu_dma_init_domain() there. As such, the dev == 
>> NULL checks in iommu_dma_init_domain() are effectively redundant.
> 
> Indeed, I have a patch for that in the stack I'm preparing:
> 
> https://gitlab.arm.com/linux-arm/linux-rm/-/commit/9b6cf2a214107c153ee278b1664f688888d7328f

Cool, so how about checking this series when you get a chance? 
You did suggest this approach, after all.

Thanks,
John
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 1/6] iommu: Refactor iommu_group_store_type()
  2021-07-14 10:36   ` John Garry
@ 2021-08-02 14:46     ` Will Deacon
  -1 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2021-08-02 14:46 UTC (permalink / raw)
  To: John Garry
  Cc: joro, robin.murphy, baolu.lu, iommu, linuxarm, thierry.reding,
	airlied, daniel, jonathanh, sakari.ailus, bingbu.cao,
	tian.shu.qiu, mchehab, gregkh, digetx, mst, jasowang,
	linux-kernel, chenxiang66

On Wed, Jul 14, 2021 at 06:36:38PM +0800, John Garry wrote:
> Function iommu_group_store_type() supports changing the default domain
> of an IOMMU group.
> 
> Many conditions need to be satisfied and steps taken for this action to be
> successful.
> 
> Satisfying these conditions and steps will be required for setting other
> IOMMU group attributes, so factor into a common part and a part specific
> to update the IOMMU group attribute.
> 
> No functional change intended.
> 
> Some code comments are tidied up also.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
>  drivers/iommu/iommu.c | 73 +++++++++++++++++++++++++++----------------
>  1 file changed, 46 insertions(+), 27 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Although likely to conflict with Robin's monster series.

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 2/6] iova: Allow rcache range upper limit to be flexible
  2021-07-14 10:36   ` John Garry
@ 2021-08-02 15:01     ` Will Deacon
  -1 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2021-08-02 15:01 UTC (permalink / raw)
  To: John Garry
  Cc: joro, robin.murphy, baolu.lu, iommu, linuxarm, thierry.reding,
	airlied, daniel, jonathanh, sakari.ailus, bingbu.cao,
	tian.shu.qiu, mchehab, gregkh, digetx, mst, jasowang,
	linux-kernel, chenxiang66

On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
> Some LLDs may request DMA mappings whose IOVA length exceeds that of the
> current rcache upper limit.

What's an LLD?

> This means that allocations for those IOVAs will never be cached, and
> always must be allocated and freed from the RB tree per DMA mapping cycle.
> This has a significant effect on performance, more so since commit
> 4e89dce72521 ("iommu/iova: Retry from last rb tree node if iova search
> fails"), as discussed at [0].
> 
> As a first step towards allowing the rcache range upper limit be
> configured, hold this value in the IOVA rcache structure, and allocate
> the rcaches separately.
> 
> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
>  drivers/iommu/dma-iommu.c |  2 +-
>  drivers/iommu/iova.c      | 23 +++++++++++++++++------
>  include/linux/iova.h      |  4 ++--
>  3 files changed, 20 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 98ba927aee1a..4772278aa5da 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -434,7 +434,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>  	 * rounding up anything cacheable to make sure that can't happen. The
>  	 * order of the unadjusted size will still match upon freeing.
>  	 */
> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> +	if (iova_len < (1 << (iovad->rcache_max_size - 1)))
>  		iova_len = roundup_pow_of_two(iova_len);
>  
>  	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index b6cf5f16123b..07ce73fdd8c1 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -15,6 +15,8 @@
>  /* The anchor node sits above the top of the usable address space */
>  #define IOVA_ANCHOR	~0UL
>  
> +#define IOVA_RANGE_CACHE_MAX_SIZE 6    /* log of max cached IOVA range size (in pages) */

Is that the same as an 'order'? i.e. IOVA_RANGE_CACHE_MAX_ORDER?

> +
>  static bool iova_rcache_insert(struct iova_domain *iovad,
>  			       unsigned long pfn,
>  			       unsigned long size);
> @@ -881,7 +883,14 @@ static void init_iova_rcaches(struct iova_domain *iovad)
>  	unsigned int cpu;
>  	int i;
>  
> -	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
> +	iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
> +
> +	iovad->rcaches = kcalloc(iovad->rcache_max_size,
> +				 sizeof(*iovad->rcaches), GFP_KERNEL);
> +	if (!iovad->rcaches)
> +		return;

Returning quietly here doesn't seem like the right thing to do. At least, I
don't think the rest of the functions here are checking rcaches against
NULL.

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-07-14 10:36   ` John Garry
@ 2021-08-02 15:06     ` Will Deacon
  -1 siblings, 0 replies; 43+ messages in thread
From: Will Deacon @ 2021-08-02 15:06 UTC (permalink / raw)
  To: John Garry
  Cc: joro, robin.murphy, baolu.lu, iommu, linuxarm, thierry.reding,
	airlied, daniel, jonathanh, sakari.ailus, bingbu.cao,
	tian.shu.qiu, mchehab, gregkh, digetx, mst, jasowang,
	linux-kernel, chenxiang66

On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
> Add max opt argument to init_iova_domain(), and use it to set the rcaches
> range.
> 
> Also fix up all users to set this value (at 0, meaning use default).

Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?

Will

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 2/6] iova: Allow rcache range upper limit to be flexible
  2021-08-02 15:01     ` Will Deacon
@ 2021-08-02 15:23       ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-08-02 15:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: joro, robin.murphy, baolu.lu, iommu, linuxarm, thierry.reding,
	airlied, daniel, jonathanh, sakari.ailus, bingbu.cao,
	tian.shu.qiu, mchehab, gregkh, digetx, mst, jasowang,
	linux-kernel, chenxiang66

On 02/08/2021 16:01, Will Deacon wrote:
> On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
>> Some LLDs may request DMA mappings whose IOVA length exceeds that of the
>> current rcache upper limit.
> 
> What's an LLD?
> 

low-level driver

maybe I'll stick with simply "drivers"

>> This means that allocations for those IOVAs will never be cached, and
>> always must be allocated and freed from the RB tree per DMA mapping cycle.
>> This has a significant effect on performance, more so since commit
>> 4e89dce72521 ("iommu/iova: Retry from last rb tree node if iova search
>> fails"), as discussed at [0].
>>
>> As a first step towards allowing the rcache range upper limit be
>> configured, hold this value in the IOVA rcache structure, and allocate
>> the rcaches separately.
>>
>> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
>>
>> Signed-off-by: John Garry <john.garry@huawei.com>
>> ---
>>   drivers/iommu/dma-iommu.c |  2 +-
>>   drivers/iommu/iova.c      | 23 +++++++++++++++++------
>>   include/linux/iova.h      |  4 ++--
>>   3 files changed, 20 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 98ba927aee1a..4772278aa5da 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -434,7 +434,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>   	 * rounding up anything cacheable to make sure that can't happen. The
>>   	 * order of the unadjusted size will still match upon freeing.
>>   	 */
>> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
>> +	if (iova_len < (1 << (iovad->rcache_max_size - 1)))
>>   		iova_len = roundup_pow_of_two(iova_len);
>>   
>>   	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index b6cf5f16123b..07ce73fdd8c1 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -15,6 +15,8 @@
>>   /* The anchor node sits above the top of the usable address space */
>>   #define IOVA_ANCHOR	~0UL
>>   
>> +#define IOVA_RANGE_CACHE_MAX_SIZE 6    /* log of max cached IOVA range size (in pages) */
> 
> Is that the same as an 'order'? i.e. IOVA_RANGE_CACHE_MAX_ORDER?

Yeah, that may be better. I was just using the same name as before.

> 
>> +
>>   static bool iova_rcache_insert(struct iova_domain *iovad,
>>   			       unsigned long pfn,
>>   			       unsigned long size);
>> @@ -881,7 +883,14 @@ static void init_iova_rcaches(struct iova_domain *iovad)
>>   	unsigned int cpu;
>>   	int i;
>>   
>> -	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>> +	iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
>> +
>> +	iovad->rcaches = kcalloc(iovad->rcache_max_size,
>> +				 sizeof(*iovad->rcaches), GFP_KERNEL);
>> +	if (!iovad->rcaches)
>> +		return;
> 
> Returning quietly here doesn't seem like the right thing to do. At least, I
> don't think the rest of the functions here are checking rcaches against
> NULL.
> 

For sure, but that is what other code which can fail here already does, 
like:

static void init_iova_rcaches(struct iova_domain *iovad)
{
	...

	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
		...

		rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), cache_line_size());
		if (WARN_ON(!rcache->cpu_rcaches))
			continue;
}

and that is not safe either.

This issue was raised a while ago. I don't mind trying to fix it - a 
slightly painful part is that it touches a few subsystems.

Thanks,
John

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-08-02 15:06     ` Will Deacon
@ 2021-08-02 16:06       ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-08-02 16:06 UTC (permalink / raw)
  To: Will Deacon
  Cc: joro, robin.murphy, baolu.lu, iommu, linuxarm, thierry.reding,
	airlied, daniel, jonathanh, sakari.ailus, bingbu.cao,
	tian.shu.qiu, mchehab, gregkh, digetx, mst, jasowang,
	linux-kernel, chenxiang66

On 02/08/2021 16:06, Will Deacon wrote:
> On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
>> Add max opt argument to init_iova_domain(), and use it to set the rcaches
>> range.
>>
>> Also fix up all users to set this value (at 0, meaning use default).
> Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?

Sure, I can do something like that. I actually did have a separate 
function along those lines in v3 before I decided to change it.

Thanks,
John

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 2/6] iova: Allow rcache range upper limit to be flexible
  2021-08-02 15:23       ` John Garry
@ 2021-08-02 16:09         ` Robin Murphy
  -1 siblings, 0 replies; 43+ messages in thread
From: Robin Murphy @ 2021-08-02 16:09 UTC (permalink / raw)
  To: John Garry, Will Deacon
  Cc: linux-kernel, sakari.ailus, mst, airlied, gregkh, linuxarm,
	jonathanh, iommu, thierry.reding, daniel, bingbu.cao, digetx,
	mchehab, jasowang, tian.shu.qiu

On 2021-08-02 16:23, John Garry wrote:
> On 02/08/2021 16:01, Will Deacon wrote:
>> On Wed, Jul 14, 2021 at 06:36:39PM +0800, John Garry wrote:
>>> Some LLDs may request DMA mappings whose IOVA length exceeds that of the
>>> current rcache upper limit.
>>
>> What's an LLD?
>>
> 
> low-level driver
> 
> maybe I'll stick with simply "drivers"
> 
>>> This means that allocations for those IOVAs will never be cached, and
>>> always must be allocated and freed from the RB tree per DMA mapping 
>>> cycle.
>>> This has a significant effect on performance, more so since commit
>>> 4e89dce72521 ("iommu/iova: Retry from last rb tree node if iova search
>>> fails"), as discussed at [0].
>>>
>>> As a first step towards allowing the rcache range upper limit be
>>> configured, hold this value in the IOVA rcache structure, and allocate
>>> the rcaches separately.
>>>
>>> [0] 
>>> https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/ 
>>>
>>>
>>> Signed-off-by: John Garry <john.garry@huawei.com>
>>> ---
>>>   drivers/iommu/dma-iommu.c |  2 +-
>>>   drivers/iommu/iova.c      | 23 +++++++++++++++++------
>>>   include/linux/iova.h      |  4 ++--
>>>   3 files changed, 20 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 98ba927aee1a..4772278aa5da 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -434,7 +434,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct 
>>> iommu_domain *domain,
>>>        * rounding up anything cacheable to make sure that can't 
>>> happen. The
>>>        * order of the unadjusted size will still match upon freeing.
>>>        */
>>> -    if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
>>> +    if (iova_len < (1 << (iovad->rcache_max_size - 1)))
>>>           iova_len = roundup_pow_of_two(iova_len);
>>>       dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
>>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>>> index b6cf5f16123b..07ce73fdd8c1 100644
>>> --- a/drivers/iommu/iova.c
>>> +++ b/drivers/iommu/iova.c
>>> @@ -15,6 +15,8 @@
>>>   /* The anchor node sits above the top of the usable address space */
>>>   #define IOVA_ANCHOR    ~0UL
>>> +#define IOVA_RANGE_CACHE_MAX_SIZE 6    /* log of max cached IOVA 
>>> range size (in pages) */
>>
>> Is that the same as an 'order'? i.e. IOVA_RANGE_CACHE_MAX_ORDER?
> 
> Yeah, that may be better. I was just using the same name as before.
> 
>>
>>> +
>>>   static bool iova_rcache_insert(struct iova_domain *iovad,
>>>                      unsigned long pfn,
>>>                      unsigned long size);
>>> @@ -881,7 +883,14 @@ static void init_iova_rcaches(struct iova_domain 
>>> *iovad)
>>>       unsigned int cpu;
>>>       int i;
>>> -    for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>>> +    iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
>>> +
>>> +    iovad->rcaches = kcalloc(iovad->rcache_max_size,
>>> +                 sizeof(*iovad->rcaches), GFP_KERNEL);
>>> +    if (!iovad->rcaches)
>>> +        return;
>>
>> Returning quietly here doesn't seem like the right thing to do. At 
>> least, I
>> don't think the rest of the functions here are checking rcaches against
>> NULL.
>>
> 
> For sure, but that is what other code which can fail here already does, 
> like:
> 
> static void init_iova_rcaches(struct iova_domain *iovad)
> {
>      ...
> 
>      for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>          ...
> 
>          rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), cache_line_size());
>          if (WARN_ON(!rcache->cpu_rcaches))
>              continue;
> }
> 
> and that is not safe either.

Yeah, along with flush queues, historically this has all been 
super-dodgy in terms of failure handling (or lack of).

> This issue was raised a while ago. I don't mind trying to fix it - a 
> slightly painful part is that it touches a few subsystems.

Maybe pull the rcache init out of iova_domain_init() entirely? Only 
iommu-dma uses {alloc,free}_iova_fast(), so TBH it's only a great big 
waste of memory for all the other IOVA domain users anyway.

The other week I started pondering how much of iommu-dma only needs to 
be exposed to the IOMMU core rather than the whole kernel now; I suppose 
there's probably an equal argument to be made for some of these bits of 
the IOVA API, and this might pave the way towards some more logical 
separation, but let's get the functional side dealt with before we worry 
too much about splitting headers.

Robin.

^ permalink raw reply	[flat|nested] 43+ messages in thread

>>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>>> index b6cf5f16123b..07ce73fdd8c1 100644
>>> --- a/drivers/iommu/iova.c
>>> +++ b/drivers/iommu/iova.c
>>> @@ -15,6 +15,8 @@
>>>   /* The anchor node sits above the top of the usable address space */
>>>   #define IOVA_ANCHOR    ~0UL
>>> +#define IOVA_RANGE_CACHE_MAX_SIZE 6    /* log of max cached IOVA 
>>> range size (in pages) */
>>
>> Is that the same as an 'order'? i.e. IOVA_RANGE_CACHE_MAX_ORDER?
> 
> Yeah, that may be better. I was just using the same name as before.
> 
>>
>>> +
>>>   static bool iova_rcache_insert(struct iova_domain *iovad,
>>>                      unsigned long pfn,
>>>                      unsigned long size);
>>> @@ -881,7 +883,14 @@ static void init_iova_rcaches(struct iova_domain 
>>> *iovad)
>>>       unsigned int cpu;
>>>       int i;
>>> -    for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>>> +    iovad->rcache_max_size = IOVA_RANGE_CACHE_MAX_SIZE;
>>> +
>>> +    iovad->rcaches = kcalloc(iovad->rcache_max_size,
>>> +                 sizeof(*iovad->rcaches), GFP_KERNEL);
>>> +    if (!iovad->rcaches)
>>> +        return;
>>
>> Returning quietly here doesn't seem like the right thing to do. At 
>> least, I
>> don't think the rest of the functions here are checking rcaches against
>> NULL.
>>
> 
> For sure, but that is what other code which can fail here already does, 
> like:
> 
> static void init_iova_rcaches(struct iova_domain *iovad)
> {
>      ...
> 
>      for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>          ...
> 
>          rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), 
> cache_line_size());
>          if (WARN_ON(!rcache->cpu_rcaches))
>              continue;
>      }
> }
> 
> and that is not safe either.

Yeah, along with flush queues, historically this has all been 
super-dodgy in terms of failure handling (or lack of).

> This issue was raised a while ago. I don't mind trying to fix it - a 
> slightly painful part is that it touches a few subsystems.

Maybe pull the rcache init out of iova_domain_init() entirely? Only 
iommu-dma uses {alloc,free}_iova_fast(), so TBH it's only a great big 
waste of memory for all the other IOVA domain users anyway.

The other week I started pondering how much of iommu-dma only needs to 
be exposed to the IOMMU core rather than the whole kernel now; I suppose 
there's probably an equal argument to be made for some of these bits of 
the IOVA API, and this might pave the way towards some more logical 
separation, but let's get the functional side dealt with before we worry 
too much about splitting headers.

Robin.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-08-02 16:06       ` John Garry
@ 2021-08-02 16:16         ` Robin Murphy
  -1 siblings, 0 replies; 43+ messages in thread
From: Robin Murphy @ 2021-08-02 16:16 UTC (permalink / raw)
  To: John Garry, Will Deacon
  Cc: joro, baolu.lu, iommu, linuxarm, thierry.reding, airlied, daniel,
	jonathanh, sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab,
	gregkh, digetx, mst, jasowang, linux-kernel, chenxiang66

On 2021-08-02 17:06, John Garry wrote:
> On 02/08/2021 16:06, Will Deacon wrote:
>> On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
>>> Add max opt argument to init_iova_domain(), and use it to set the 
>>> rcaches
>>> range.
>>>
>>> Also fix up all users to set this value (at 0, meaning use default).
>> Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
> 
> Sure, I can do something like that. I actually did have something 
> separate along those lines in v3 before I decided to change it.

Y'know, at this point I'm now starting to seriously wonder whether 
moving the rcaches into iommu_dma_cookie wouldn't make a whole lot of 
things simpler... :/

Does that sound like crazy talk to you, or an idea worth entertaining?

Robin.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-08-02 16:16         ` Robin Murphy
@ 2021-08-02 16:40           ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-08-02 16:40 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon
  Cc: joro, baolu.lu, iommu, linuxarm, thierry.reding, airlied, daniel,
	jonathanh, sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab,
	gregkh, digetx, mst, jasowang, linux-kernel, chenxiang66

On 02/08/2021 17:16, Robin Murphy wrote:
> On 2021-08-02 17:06, John Garry wrote:
>> On 02/08/2021 16:06, Will Deacon wrote:
>>> On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
>>>> Add max opt argument to init_iova_domain(), and use it to set the 
>>>> rcaches
>>>> range.
>>>>
>>>> Also fix up all users to set this value (at 0, meaning use default).
>>> Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
>>
>> Sure, I can do something like that. I actually did have something 
>> separate along those lines in v3 before I decided to change it.
> 
> Y'know, at this point I'm now starting to seriously wonder whether 
> moving the rcaches into iommu_dma_cookie wouldn't make a whole lot of 
> things simpler... :/

As I see it, the rcache stuff isn't really specific to IOVA anyway, so it 
seems sane.

> 
> Does that sound like crazy talk to you, or an idea worth entertaining?

If you're going to start moving things, has anyone considered putting 
rcache support in lib as a generic solution to "Magazines and Vmem: .." 
paper?

Thanks,
John

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-08-02 16:40           ` John Garry
@ 2021-08-02 17:18             ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-08-02 17:18 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon
  Cc: linux-kernel, sakari.ailus, mst, airlied, gregkh, linuxarm,
	jonathanh, iommu, thierry.reding, daniel, bingbu.cao, digetx,
	mchehab, jasowang, tian.shu.qiu

On 02/08/2021 17:40, John Garry wrote:
> On 02/08/2021 17:16, Robin Murphy wrote:
>> On 2021-08-02 17:06, John Garry wrote:
>>> On 02/08/2021 16:06, Will Deacon wrote:
>>>> On Wed, Jul 14, 2021 at 06:36:42PM +0800, John Garry wrote:
>>>>> Add max opt argument to init_iova_domain(), and use it to set the 
>>>>> rcaches
>>>>> range.
>>>>>
>>>>> Also fix up all users to set this value (at 0, meaning use default).
>>>> Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
>>>
>>> Sure, I can do something like that. I actually did have something 
>>> separate along those lines in v3 before I decided to change it.
>>
>> Y'know, at this point I'm now starting to seriously wonder whether 
>> moving the rcaches into iommu_dma_cookie wouldn't make a whole lot of 
>> things simpler... :/
> 
> As I see it, the rcache stuff isn't really specific to IOVA anyway, so it 
> seems sane.
> 
>>
>> Does that sound like crazy talk to you, or an idea worth entertaining?
> 
> If you're going to start moving things, has anyone considered putting 
> rcache support in lib as a generic solution to "Magazines and Vmem: .." 
> paper?

Having said that, I still think that the rcache code has certain 
scalability issues, as discussed before. So making it more generic and 
then discarding it would be less than ideal.

Thanks,
John

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain()
  2021-08-02 16:16         ` Robin Murphy
@ 2021-09-21  8:48           ` John Garry
  -1 siblings, 0 replies; 43+ messages in thread
From: John Garry @ 2021-09-21  8:48 UTC (permalink / raw)
  To: Robin Murphy, Will Deacon
  Cc: joro, baolu.lu, iommu, linuxarm, thierry.reding, airlied, daniel,
	jonathanh, sakari.ailus, bingbu.cao, tian.shu.qiu, mchehab,
	gregkh, digetx, mst, jasowang, linux-kernel, chenxiang66

On 02/08/2021 17:16, Robin Murphy wrote:
>>>>
>>>> Also fix up all users to set this value (at 0, meaning use default).
>>> Wrap that in init_iova_domain_defaults() to avoid the mysterious 0?
>>
>> Sure, I can do something like that. I actually did have something 
>> separate along those lines in v3 before I decided to change it.
> 
> Y'know, at this point I'm now starting to seriously wonder whether 
> moving the rcaches into iommu_dma_cookie wouldn't make a whole lot of 
> things simpler... :/
> 
> Does that sound like crazy talk to you, or an idea worth entertaining?

Hi Robin,

JFYI, to try to make inroads into my IOVA issues, I'm going to look to 
do this first, if you don't mind. I think that the fq stuff can likewise 
be pulled into a separate structure, rather than living in iova_domain, 
and that structure can also be a member of iommu_dma_cookie.

BTW, with regards to separating the rcache magazine code out, I see 
someone already trying to introduce something similar:

https://lore.kernel.org/lkml/CAKW4uUxperg41z8Lu5QYsS-YEGt1anuD1CuiUqXC0ANFqJBosQ@mail.gmail.com/T/#me4cc5de775ad16ab3d6e7ca854b55f274ddcba08

https://lore.kernel.org/lkml/YUkErK1vVZMht4s8@casper.infradead.org/T/#t

Cheers,
John

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2021-09-21  8:45 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-15  1:36 [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init kernel test robot
2021-07-19  7:58 ` Dan Carpenter
2021-07-19  9:12 ` John Garry
2021-07-19  9:32   ` Robin Murphy
2021-07-19 10:45     ` John Garry
  -- strict thread matches above, loose matches on Subject: below --
2021-07-14 10:36 [PATCH v4 0/6] iommu: Allow IOVA rcache range be configured John Garry
2021-07-14 10:36 ` [PATCH v4 1/6] iommu: Refactor iommu_group_store_type() John Garry
2021-08-02 14:46   ` Will Deacon
2021-07-14 10:36 ` [PATCH v4 2/6] iova: Allow rcache range upper limit to be flexible John Garry
2021-08-02 15:01   ` Will Deacon
2021-08-02 15:23     ` John Garry
2021-08-02 16:09       ` Robin Murphy
2021-07-14 10:36 ` [PATCH v4 3/6] iommu: Allow iommu_change_dev_def_domain() realloc default domain for same type John Garry
2021-07-14 10:36 ` [PATCH v4 4/6] iommu: Allow max opt DMA len be set for a group via sysfs John Garry
2021-07-14 10:36 ` [PATCH v4 5/6] iova: Add iova_len argument to init_iova_domain() John Garry
2021-08-02 15:06   ` Will Deacon
2021-08-02 16:06     ` John Garry
2021-08-02 16:16       ` Robin Murphy
2021-08-02 16:40         ` John Garry
2021-08-02 17:18           ` John Garry
2021-09-21  8:48         ` John Garry
2021-07-14 10:36 ` [PATCH v4 6/6] dma-iommu: Pass iova len for IOVA domain init John Garry
