From: Robin Murphy <robin.murphy@arm.com>
To: Sven Peter <sven@svenpeter.dev>,
Sven Peter <iommu@lists.linux-foundation.org>
Cc: Arnd Bergmann <arnd@kernel.org>, Hector Martin <marcan@marcan.st>,
linux-kernel@vger.kernel.org, Alexander Graf <graf@amazon.com>,
Mohamed Mediouni <mohamed.mediouni@caramail.com>,
Will Deacon <will@kernel.org>
Subject: Re: [RFC PATCH 2/3] iommu/dma-iommu: Support iovad->granule > PAGE_SIZE
Date: Mon, 9 Aug 2021 18:41:46 +0100 [thread overview]
Message-ID: <5002ed91-416c-d7ee-b1ab-a50c590749c2@arm.com> (raw)
In-Reply-To: <dadbd8b0-171a-4008-8a2e-f68abfed9285@www.fastmail.com>
On 2021-08-07 12:47, Sven Peter via iommu wrote:
>
>
> On Fri, Aug 6, 2021, at 20:04, Robin Murphy wrote:
>> On 2021-08-06 16:55, Sven Peter via iommu wrote:
>>> @@ -1006,6 +1019,31 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>> if (dev_is_untrusted(dev))
>>> return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
>>>
>>> + /*
>>> + * If the IOMMU pagesize is larger than the CPU pagesize we will
>>> + * very likely run into sgs with a physical address that is not aligned
>>> + * to an IOMMU page boundary. Fall back to just mapping every entry
>>> + * independently with __iommu_dma_map then.
>>
>> Scatterlist segments often don't have nicely aligned ends, which is why
>> we already align things to IOVA granules in main loop here. I think in
>> principle we'd just need to move the non-IOVA-aligned part of the
>> address from sg->page to sg->offset in the temporary transformation for
>> the rest of the assumptions to hold. I don't blame you for being timid
>> about touching that, though - it took me 3 tries to get right when I
>> first wrote it...
>>
>
>
> I've spent some time with that code now and I think we cannot use it
> but have to fall back to iommu_dma_map_sg_swiotlb (even though that swiotlb
> part is a lie then):
>
> When we have sg_phys(s) = 0x802e65000 with s->offset = 0 the paddr
> is aligned to PAGE_SIZE but has an offset of 0x1000 from something
> the IOMMU can map.
> Now this would result in s->offset = -0x1000 which is already weird
> enough.
> Offset is unsigned (and 32bit) so this will actually look like
> s->offset = 0xfffff000 then, which isn't much better.
> And then sg_phys(s) = 0x902e64000 (instead of 0x802e64000) and
> we'll map some random memory in iommu_map_sg_atomic and a little bit later
> everything explodes.
>
> Now I could probably adjust the phys addr backwards and make sure offset is
> always positive (and possibly larger than PAGE_SIZE) and later restore it
> in __finalise_sg then but I feel like that's pushing this a little bit too far.
Yes, that's what I meant. At a quick guess, something like the
completely untested diff below. It really comes down to what we want to
achieve here - if it's just to make this thing work at all, then I'd
favour bolting on the absolute minimum changes, possibly even cheating
by tainting the kernel and saying all bets are off instead of trying to
handle the more involved corners really properly. However if you want to
work towards this being a properly-supported thing, then I think it's
worth generalising the existing assumptions of page alignment from the
beginning.
Robin.
----->8-----
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 58cc7297aab5..73aeaa8bfc73 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -895,8 +895,8 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
unsigned int s_length = sg_dma_len(s);
unsigned int s_iova_len = s->length;
- s->offset += s_iova_off;
- s->length = s_length;
+ sg_set_page(s, phys_to_page(sg_phys(s)), s_length,
+ s_iova_off & ~PAGE_MASK);
sg_dma_address(s) = DMA_MAPPING_ERROR;
sg_dma_len(s) = 0;
@@ -940,10 +940,9 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
int i;
for_each_sg(sg, s, nents, i) {
- if (sg_dma_address(s) != DMA_MAPPING_ERROR)
- s->offset += sg_dma_address(s);
if (sg_dma_len(s))
- s->length = sg_dma_len(s);
+ sg_set_page(s, phys_to_page(sg_phys(s)), sg_dma_len(s),
+ sg_dma_address(s) & ~PAGE_MASK);
sg_dma_address(s) = DMA_MAPPING_ERROR;
sg_dma_len(s) = 0;
}
@@ -1019,15 +1018,16 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
* stashing the unaligned parts in the as-yet-unused DMA fields.
*/
for_each_sg(sg, s, nents, i) {
- size_t s_iova_off = iova_offset(iovad, s->offset);
+ phys_addr_t s_phys = sg_phys(s);
+ size_t s_iova_off = iova_offset(iovad, s_phys);
size_t s_length = s->length;
size_t pad_len = (mask - iova_len + 1) & mask;
sg_dma_address(s) = s_iova_off;
sg_dma_len(s) = s_length;
- s->offset -= s_iova_off;
s_length = iova_align(iovad, s_length + s_iova_off);
- s->length = s_length;
+ sg_set_page(s, phys_to_page(s_phys - s_iova_off),
+ s_length, s->offset ^ s_iova_off);
/*
* Due to the alignment of our single IOVA allocation, we can
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
next prev parent reply other threads:[~2021-08-09 17:42 UTC|newest]
Thread overview: 14+ messages / expand[flat|nested] mbox.gz Atom feed top
2021-08-06 15:55 [RFC PATCH 0/3] iommu/dma-iommu: Support IOMMU page size larger than the CPU page size Sven Peter via iommu
2021-08-06 15:55 ` [RFC PATCH 1/3] iommu: Move IOMMU pagesize check to attach_device Sven Peter via iommu
2021-08-06 15:55 ` [RFC PATCH 2/3] iommu/dma-iommu: Support iovad->granule > PAGE_SIZE Sven Peter via iommu
2021-08-06 18:04 ` Robin Murphy
2021-08-07 8:41 ` Sven Peter via iommu
2021-08-09 18:37 ` Robin Murphy
2021-08-09 19:57 ` Sven Peter via iommu
2021-08-07 11:47 ` Sven Peter via iommu
2021-08-09 17:41 ` Robin Murphy [this message]
2021-08-09 20:45 ` Sven Peter via iommu
2021-08-10 9:51 ` Robin Murphy
2021-08-11 20:18 ` Sven Peter via iommu
2021-08-12 12:43 ` Robin Murphy
2021-08-06 15:55 ` [RFC PATCH 3/3] iommu: Introduce __IOMMU_DOMAIN_LARGE_PAGES Sven Peter via iommu
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=5002ed91-416c-d7ee-b1ab-a50c590749c2@arm.com \
--to=robin.murphy@arm.com \
--cc=arnd@kernel.org \
--cc=graf@amazon.com \
--cc=iommu@lists.linux-foundation.org \
--cc=linux-kernel@vger.kernel.org \
--cc=marcan@marcan.st \
--cc=mohamed.mediouni@caramail.com \
--cc=sven@svenpeter.dev \
--cc=will@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).