From: Krishna Reddy <vdumpa@nvidia.com>
To: Will Deacon <will@kernel.org>, Ashish Mhetre <amhetre@nvidia.com>
Cc: "joro@8bytes.org" <joro@8bytes.org>,
"robin.murphy@arm.com" <robin.murphy@arm.com>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: RE: [PATCH 1/2] iommu: Fix race condition during default domain allocation
Date: Fri, 11 Jun 2021 18:30:25 +0000
Message-ID: <BY5PR12MB3764CB9BBC42426B67537563B3349@BY5PR12MB3764.namprd12.prod.outlook.com>
In-Reply-To: <20210611104524.GD15274@willie-the-truck>
> > + mutex_lock(&group->mutex);
> > iommu_alloc_default_domain(group, dev);
> > + mutex_unlock(&group->mutex);
>
> It feels wrong to serialise this for everybody just to cater for systems with
> aliasing SIDs between devices.
Serialization is limited to devices in the same group; unless devices share a SID, they won't be in the same group.
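For context, a rough sketch of where the proposed lock sits in the probe path (simplified and paraphrased, not the exact upstream iommu_probe_device(); error handling and reference management dropped). The mutex is per-group, so only probes of devices that alias into the same iommu_group contend on it:

/*
 * Simplified probe path with the proposed locking (sketch only).
 * Devices with distinct SIDs land in distinct groups and take
 * different group->mutex instances, so they do not serialise
 * against each other.
 */
static int probe_path_sketch(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);

	if (!group)
		return -ENODEV;

	mutex_lock(&group->mutex);
	iommu_alloc_default_domain(group, dev);
	mutex_unlock(&group->mutex);

	if (group->default_domain)
		return __iommu_attach_device(group->default_domain, dev);

	return 0;
}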
> Can you provide some more information about exactly what the h/w
> configuration is, and the callstack which exhibits the race, please?
The failure is an after-effect: a page fault. I don't have a failure call stack here; Ashish traced it through print messages and can provide them.
From the print messages, the following was observed in the page-fault case:
Device1: iommu_probe_device() --> iommu_alloc_default_domain() --> iommu_group_alloc_default_domain() --> __iommu_attach_device(group->default_domain)
Device2: iommu_probe_device() --> iommu_alloc_default_domain() --> iommu_group_alloc_default_domain() --> __iommu_attach_device(group->default_domain)
Both devices (with the same SID) enter iommu_group_alloc_default_domain(), and each gets attached to a different group->default_domain, because the second device overwrites group->default_domain after the first has already attached to the domain it created.
The SMMU is set up to use the first domain for the context page table, whereas all dma map/unmap requests from the second device are performed on a domain the SMMU does not use for context translations, so IOVA accesses from the second device (not mapped in the first domain) lead to page faults.
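To make the interleaving explicit, here is a paraphrased sketch of the allocation step (simplified from the flow described above, not verbatim iommu.c):

/*
 * Without group->mutex held across this check-then-allocate, both
 * probes can observe default_domain == NULL before either write lands:
 *
 *   Device1: sees NULL -> allocates dom1 -> attaches to dom1
 *   Device2: sees NULL -> allocates dom2 -> overwrites default_domain
 *
 * The SMMU context bank keeps translating with dom1's page table,
 * while Device2's dma map/unmap calls populate dom2, so Device2's
 * IOVA accesses fault.
 */
static int alloc_default_domain_sketch(struct iommu_group *group,
				       struct device *dev)
{
	if (group->default_domain)	/* racy when run unlocked */
		return 0;

	return iommu_group_alloc_default_domain(dev->bus, group,
						iommu_def_domain_type);
}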
-KR
Thread overview:
2021-06-10 4:16 [PATCH 0/2] iommu/arm-smmu: Fix races in iommu domain/group creation Ashish Mhetre
2021-06-10 4:16 ` [PATCH 1/2] iommu: Fix race condition during default domain allocation Ashish Mhetre
2021-06-11 10:45 ` Will Deacon
2021-06-11 12:49 ` Robin Murphy
2021-06-17 5:51 ` Ashish Mhetre
2021-06-17 17:49 ` Will Deacon
2021-06-11 18:30 ` Krishna Reddy [this message]
2021-06-10 4:16 ` [PATCH 2/2] iommu/arm-smmu: Fix race condition during iommu_group creation Ashish Mhetre