From: Ashish Mhetre <amhetre@nvidia.com>
To: <amhetre@nvidia.com>, <robin.murphy@arm.com>, <will@kernel.org>,
	<vdumpa@nvidia.com>
Cc: <iommu@lists.linux-foundation.org>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>
Subject: [Patch V2 1/2] iommu: Fix race condition during default domain allocation
Date: Fri, 18 Jun 2021 02:00:36 +0530	[thread overview]
Message-ID: <1623961837-12540-2-git-send-email-amhetre@nvidia.com> (raw)
In-Reply-To: <1623961837-12540-1-git-send-email-amhetre@nvidia.com>

The iommu default domain is created more than once when multiple display
heads (devices) are probed asynchronously. All the display heads share
the same SID and are expected to be in the same domain. As the
iommu_alloc_default_domain() call is not protected, it ends up creating
two domains for two display devices which should ideally be in the same
domain. iommu_alloc_default_domain() checks whether a domain is already
allocated for the given iommu group, but because of this race the check
fails and two different domains are created. This leads to context
faults when one device accesses an IOVA mapped by the other device.

Fix this by protecting the iommu_alloc_default_domain() call with
group->mutex. With this fix, serialization happens only for devices
sharing the same group. Also, only the first device in the group holds
the mutex until the default domain is created; the remaining devices
just check for the existing domain and then release the mutex.

Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
---
Changes since V1:
- Update the commit message per Will's suggestion

 drivers/iommu/iommu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 808ab70..2700500 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -273,7 +273,9 @@ int iommu_probe_device(struct device *dev)
 	 * support default domains, so the return value is not yet
 	 * checked.
 	 */
+	mutex_lock(&group->mutex);
 	iommu_alloc_default_domain(group, dev);
+	mutex_unlock(&group->mutex);
 
 	if (group->default_domain) {
 		ret = __iommu_attach_device(group->default_domain, dev);
-- 
2.7.4
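
[Editor's note] For illustration only, not part of the patch: a minimal
userspace sketch of the check-then-allocate pattern that this change
serializes. Two threads stand in for two display devices probing
concurrently; the struct, function and variable names below are
hypothetical and merely model the kernel path, they are not the kernel
API. With the mutex taken around the check-and-allocate, the second
"probe" sees the domain created by the first instead of allocating its
own.

/* Build with: cc -pthread sketch.c  (illustrative example, not kernel code) */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_group {
	pthread_mutex_t mutex;
	void *default_domain;
};

static struct fake_group group = { PTHREAD_MUTEX_INITIALIZER, NULL };
static int allocations;

/* Models iommu_alloc_default_domain(): allocate only if none exists yet. */
static void alloc_default_domain(struct fake_group *grp)
{
	if (grp->default_domain)
		return;				/* another device already set it up */
	grp->default_domain = malloc(64);	/* stands in for a real domain */
	allocations++;				/* safe here: caller holds grp->mutex */
}

/* Models iommu_probe_device() for one device in the group. */
static void *probe_device(void *arg)
{
	struct fake_group *grp = arg;

	/* The fix: serialize the check-and-allocate on the group mutex. */
	pthread_mutex_lock(&grp->mutex);
	alloc_default_domain(grp);
	pthread_mutex_unlock(&grp->mutex);
	return NULL;
}

int main(void)
{
	pthread_t t[2];

	for (int i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, probe_device, &group);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);

	printf("domains allocated: %d\n", allocations);	/* always 1 with the lock */
	free(group.default_domain);
	return 0;
}

Without the lock/unlock pair around alloc_default_domain(), both threads
can observe default_domain == NULL and allocate twice, which is the
double allocation described in the commit message above.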


Thread overview: 21+ messages
2021-06-17 20:30 [Patch V2 0/2] iommu/arm-smmu: Fix races in iommu domain/group creation Ashish Mhetre
2021-06-17 20:30 ` [Patch V2 1/2] iommu: Fix race condition during default domain allocation Ashish Mhetre [this message]
2021-06-17 20:30 ` [Patch V2 2/2] iommu/arm-smmu: Fix race condition during iommu_group creation Ashish Mhetre
2021-07-15  4:44 ` [Patch V2 0/2] iommu/arm-smmu: Fix races in iommu domain/group creation Ashish Mhetre
2021-08-02 15:16 ` Will Deacon
2021-08-02 15:46   ` Robin Murphy
2021-08-09 14:54     ` Will Deacon
