From: Sven Peter via iommu <email@example.com>
Cc: Arnd Bergmann <firstname.lastname@example.org>, Will Deacon <email@example.com>,
Hector Martin <firstname.lastname@example.org>,
email@example.com, Alexander Graf <firstname.lastname@example.org>,
Mohamed Mediouni <email@example.com>,
Robin Murphy <firstname.lastname@example.org>
Subject: [RFC PATCH 0/3] iommu/dma-iommu: Support IOMMU page size larger than the CPU page size
Date: Fri, 6 Aug 2021 17:55:20 +0200
Message-ID: <email@example.com>
On the Apple M1 there's this slightly annoying situation where the DART IOMMU
has a hard-wired page size of 16KB. Additionally, the DARTs for some hardware
(USB A ports, WiFi, Ethernet, Thunderbolt PCIe) cannot be switched to bypass
mode and it's also not easily possible to program a software bypass mode.
This is a problem for kernels configured with 4K pages. Unfortunately,
most distributions ship with those by default.
There's not much that can be done for IOMMU_DOMAIN_UNMANAGED domains since
most API clients likely expect to be able to map single CPU pages.
For IOMMU_DOMAIN_DMA domains however, dma-iommu.c is the only code that
uses the raw IOMMU API to manage these domains and can possibly be adapted
to still work correctly.
Essentially, I changed the relevant alignments to happen with respect to both
PAGE_SIZE and iovad->granule. The sglist code also can no longer use the
single-IOVA-allocation optimization since most phys_addrs will not be aligned
to the IOMMU page size.
I'd like to get some early feedback on this approach to see whether it's
feasible to continue working on it, whether a different approach would work
better, or whether this setup just won't be supported.
I'm not very confident I've covered all the necessary cases, but I'll take
a closer look at every function in dma-iommu.c if there's a chance that
this will be accepted eventually. The current changes are enough to boot
from a USB device and use the Ethernet adapter on my M1 Mini with 4K pages.
One issue I see is that this will end up wasting memory. For example, dma_pool_*
will dma_alloc_coherent() PAGE_SIZE bytes and pack the individual allocations
into those buffers. These allocations will get padded to SZ_16K, but dma_pool
will be completely unaware that it got 4x as much memory as it requested and
will leave the rest unused :-(
The other issue I'm aware of is v4l2, which expects that a page-aligned sglist
can be represented contiguously in IOVA space.
Sven Peter (3):
iommu: Move IOMMU pagesize check to attach_device
iommu/dma-iommu: Support iovad->granule > PAGE_SIZE
iommu: Introduce __IOMMU_DOMAIN_LARGE_PAGES
drivers/iommu/dma-iommu.c | 87 ++++++++++++++++++++++++++++++++++-----
drivers/iommu/iommu.c | 36 ++++++++++++++--
drivers/iommu/iova.c | 7 ++--
include/linux/iommu.h | 14 ++++---
4 files changed, 123 insertions(+), 21 deletions(-)
Thread overview: 14+ messages
2021-08-06 15:55 Sven Peter via iommu [this message]
2021-08-06 15:55 ` [RFC PATCH 1/3] iommu: Move IOMMU pagesize check to attach_device Sven Peter via iommu
2021-08-06 15:55 ` [RFC PATCH 2/3] iommu/dma-iommu: Support iovad->granule > PAGE_SIZE Sven Peter via iommu
2021-08-06 18:04 ` Robin Murphy
2021-08-07 8:41 ` Sven Peter via iommu
2021-08-09 18:37 ` Robin Murphy
2021-08-09 19:57 ` Sven Peter via iommu
2021-08-07 11:47 ` Sven Peter via iommu
2021-08-09 17:41 ` Robin Murphy
2021-08-09 20:45 ` Sven Peter via iommu
2021-08-10 9:51 ` Robin Murphy
2021-08-11 20:18 ` Sven Peter via iommu
2021-08-12 12:43 ` Robin Murphy
2021-08-06 15:55 ` [RFC PATCH 3/3] iommu: Introduce __IOMMU_DOMAIN_LARGE_PAGES Sven Peter via iommu