From: Ashish Mhetre <amhetre@nvidia.com>
To: <amhetre@nvidia.com>, <robin.murphy@arm.com>, <will@kernel.org>,
	<vdumpa@nvidia.com>
Cc: <iommu@lists.linux-foundation.org>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>
Subject: [Patch V2 2/2] iommu/arm-smmu: Fix race condition during iommu_group creation
Date: Fri, 18 Jun 2021 02:00:37 +0530
Message-ID: <1623961837-12540-3-git-send-email-amhetre@nvidia.com>
In-Reply-To: <1623961837-12540-1-git-send-email-amhetre@nvidia.com>

From: Krishna Reddy <vdumpa@nvidia.com>

iommu_group is created more than once when multiple display heads
(devices) on the Tegra194 SoC are probed asynchronously. All the
display heads share the same SID and are expected to be in the same
iommu_group.

Since arm_smmu_device_group() does not serialize group creation across
devices, multiple groups are created for devices sharing the same SID,
causing subsequent IOMMU faults: during the race, the
iommu_probe_device() calls for two display devices both end up in
arm_smmu_device_group(), so two groups are created. Ideally, once the
group has been created for the first display device, the second
display device should reuse the same group.

The race results in context faults when one display device accesses an
IOVA mapped by the other, which should never happen for devices
sharing the same SID.

Fix this by protecting group creation with smmu->stream_map_mutex.
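
Roughly, the problematic interleaving looks like this (both devices
map to the same s2crs[idx] because they share the SID;
generic_device_group() is just one of the possible allocators,
depending on the bus type):

  CPU0 (display head A)             CPU1 (display head B)
  ---------------------             ---------------------
  arm_smmu_device_group()
    s2crs[idx].group == NULL
                                    arm_smmu_device_group()
                                      s2crs[idx].group == NULL
    group = generic_device_group()
    s2crs[idx].group = group
                                      group = generic_device_group()
                                      s2crs[idx].group = group
                                      /* second group clobbers first */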

Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
---
Changes since V1:
- Update the commit message per Will's suggestion

 drivers/iommu/arm/arm-smmu/arm-smmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 6f72c4d..21af179 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -1458,6 +1458,7 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
 	struct iommu_group *group = NULL;
 	int i, idx;
 
+	mutex_lock(&smmu->stream_map_mutex);
 	for_each_cfg_sme(cfg, fwspec, i, idx) {
 		if (group && smmu->s2crs[idx].group &&
 		    group != smmu->s2crs[idx].group)
@@ -1466,8 +1467,10 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
 		group = smmu->s2crs[idx].group;
 	}
 
-	if (group)
+	if (group) {
+		mutex_unlock(&smmu->stream_map_mutex);
 		return iommu_group_ref_get(group);
+	}
 
 	if (dev_is_pci(dev))
 		group = pci_device_group(dev);
@@ -1481,6 +1484,7 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
 		for_each_cfg_sme(cfg, fwspec, i, idx)
 			smmu->s2crs[idx].group = group;
 
+	mutex_unlock(&smmu->stream_map_mutex);
 	return group;
 }
 
-- 
2.7.4
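
[Editorial note: for reference, below is an abridged sketch of how
arm_smmu_device_group() reads with this patch applied, reconstructed
from the upstream driver. One caveat: the pre-existing
ERR_PTR(-EINVAL) return inside the loop, whose context is not visible
in the hunks above, would also need to drop the mutex; the sketch
includes that unlock.]

static struct iommu_group *arm_smmu_device_group(struct device *dev)
{
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
	struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
	struct arm_smmu_device *smmu = cfg->smmu;
	struct iommu_group *group = NULL;
	int i, idx;

	/* Serialize group lookup/creation against other masters' probes */
	mutex_lock(&smmu->stream_map_mutex);
	for_each_cfg_sme(cfg, fwspec, i, idx) {
		if (group && smmu->s2crs[idx].group &&
		    group != smmu->s2crs[idx].group) {
			/* Conflicting groups across SMEs: unlock on error too */
			mutex_unlock(&smmu->stream_map_mutex);
			return ERR_PTR(-EINVAL);
		}

		group = smmu->s2crs[idx].group;
	}

	/* A device with the same SID already created the group: reuse it */
	if (group) {
		mutex_unlock(&smmu->stream_map_mutex);
		return iommu_group_ref_get(group);
	}

	if (dev_is_pci(dev))
		group = pci_device_group(dev);
	else if (dev_is_fsl_mc(dev))
		group = fsl_mc_device_group(dev);
	else
		group = generic_device_group(dev);

	/* Publish the group so later probes with the same SID reuse it */
	if (!IS_ERR(group))
		for_each_cfg_sme(cfg, fwspec, i, idx)
			smmu->s2crs[idx].group = group;

	mutex_unlock(&smmu->stream_map_mutex);
	return group;
}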


Thread overview:
2021-06-17 20:30 [Patch V2 0/2] iommu/arm-smmu: Fix races in iommu domain/group creation Ashish Mhetre
2021-06-17 20:30 ` [Patch V2 1/2] iommu: Fix race condition during default domain allocation Ashish Mhetre
2021-06-17 20:30 ` [Patch V2 2/2] iommu/arm-smmu: Fix race condition during iommu_group creation Ashish Mhetre [this message]
2021-07-15  4:44 ` [Patch V2 0/2] iommu/arm-smmu: Fix races in iommu domain/group creation Ashish Mhetre
2021-08-02 15:16 ` Will Deacon
2021-08-02 15:46   ` Robin Murphy
2021-08-09 14:54     ` Will Deacon