From: Lu Baolu <baolu.lu@linux.intel.com>
To: Nadav Amit <nadav.amit@gmail.com>
Cc: chenjiashang <chenjiashang@huawei.com>,
"Tian, Kevin" <kevin.tian@intel.com>,
"alex.williamson@redhat.com" <alex.williamson@redhat.com>,
David Woodhouse <dwmw2@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
"Gonglei \(Arei\)" <arei.gonglei@huawei.com>,
"Longpeng \(Mike,
Cloud Infrastructure Service Product Dept.\)"
<longpeng2@huawei.com>, "will@kernel.org" <will@kernel.org>
Subject: Re: A problem of Intel IOMMU hardware ?
Date: Sat, 27 Mar 2021 13:27:59 +0800
Message-ID: <a49ec650-5dae-0045-1ea3-1978009b3b1f@linux.intel.com>
In-Reply-To: <55A4B205-BC38-4F16-9EB9-54026C326E60@gmail.com>
Hi Nadav,
On 3/27/21 12:36 PM, Nadav Amit wrote:
>
>
>> On Mar 26, 2021, at 7:31 PM, Lu Baolu <baolu.lu@linux.intel.com> wrote:
>>
>> Hi Nadav,
>>
>> On 3/19/21 12:46 AM, Nadav Amit wrote:
>>> So here is my guess:
>>> Intel probably used, as a basis for the IOTLB, an implementation of
>>> some other (regular) TLB design.
>>> Intel SDM says regarding TLBs (4.10.4.2 "Recommended Invalidation"):
>>> "Software wishing to prevent this uncertainty should not write to
>>> a paging-structure entry in a way that would change, for any linear
>>> address, both the page size and either the page frame, access rights,
>>> or other attributes."
>>> Now the aforementioned uncertainty is a bit different (multiple
>>> *valid* translations of a single address). Yet, perhaps this is
>>> yet another thing that might happen.
>>> From a brief look at the handling of MMU (not IOMMU) hugepages
>>> in Linux, indeed the PMD is first cleared and flushed before a
>>> new valid PMD is set. This is possible for MMUs since they
>>> allow the software to handle spurious page-faults gracefully.
>>> This is not the case for the IOMMU though (without PRI).
>>> Not sure this explains everything though. If that is the problem,
>>> then during a mapping that changes page-sizes, a TLB flush is
>>> needed, similarly to the one Longpeng did manually.
>>
>> I have been working with Longpeng on this issue these days. It turned
>> out that your guess is right. The PMD is first cleared but not flushed
>> before a new valid one is set. The previous entry might still be cached
>> in the paging-structure caches, which leads to disaster.
>>
>> In __domain_mapping():
>>
>> 2352                 /*
>> 2353                  * Ensure that old small page tables are
>> 2354                  * removed to make room for superpage(s).
>> 2355                  * We're adding new large pages, so make sure
>> 2356                  * we don't remove their parent tables.
>> 2357                  */
>> 2358                 dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
>> 2359                                        largepage_lvl + 1);
>>
>> I guess adding a cache flush operation after PMD switching should solve
>> the problem.
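>>
>> Something like below, perhaps (completely untested, and I'm assuming
>> that iommu_flush_iotlb_psi() is the right helper for a page-selective
>> IOTLB + paging-structure cache flush here):
>>
>>         dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
>>                                largepage_lvl + 1);
>>
>>         /*
>>          * The freed page tables may still be cached in the
>>          * paging-structure caches. Flush them on each iommu
>>          * before the superpage entry is installed.
>>          */
>>         for_each_domain_iommu(i, domain)
>>                 iommu_flush_iotlb_psi(g_iommus[i], domain, iov_pfn,
>>                                       end_pfn - iov_pfn + 1, 0, 0);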
>>
>> I am still not clear about this comment:
>>
>> "
>> This is possible for MMUs since they allow the software to handle
>> spurious page-faults gracefully. This is not the case for the IOMMU
>> though (without PRI).
>> "
>>
>> Can you please shed more light on this?
>
> I was looking at the code in more detail, and apparently my concern
> is incorrect.
>
> I was under the assumption that the IOMMU map/unmap can merge/split
> (specifically split) huge-pages. For instance, if you map 2MB and
> then unmap 4KB out of the 2MB, then you would split the hugepage
> and keep the rest of the mappings alive. This is the way the MMU is
> usually managed. In my defense, I also saw such partial unmappings
> in Longpeng’s first scenario.
>
> If this was possible, then you would have a case in which out of 2MB
> (for instance), 4KB were unmapped, and you need to split the 2MB
> hugepage into 4KB pages. If you try to clear the PMD, flush, and then
> set the PMD to point to a table with live 4KB PTEs, you can have
> an interim state in which the PMD is not present. DMAs that arrive
> at this stage might fault, and without PRI (and device support)
> you do not have a way of restarting the DMA after the hugepage split
> is completed.
Got you, and thanks a lot for sharing.
For current IOMMU usage, I can't see any case that splits a huge page
into 4KB pages, but in the near future we do have a need to split huge
pages. For example, when we want to use the A/D bit to track the
DMA-dirty pages during VM migration, it would be an optimization if we
could split a huge page into 4KB ones. So far, the solution I have
considered is:
1) Prepare the split subtables in advance;
[it's identical to the existing mapping, only using 4KB pages instead
of a huge page.]
2) Switch the super (huge) page's leaf entry;
[at this point, hardware could use both subtables. I am not sure
whether the hardware allows a dynamic switch of a page table entry
from one valid entry to another valid one.]
3) Flush the cache.
[hardware will use the new subtable]
As long as we can make sure that the old subtable won't be used by the
hardware anymore, we can safely release the old table. A very rough
sketch of this sequence is below.
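
Here is what I have in mind for a 2MB (level-2) superpage. This is
untested pseudo-kernel-C: I'm borrowing alloc_pgtable_page(),
for_each_domain_iommu() and iommu_flush_iotlb_psi() from
drivers/iommu/intel/iommu.c, and whether the valid-to-valid switch in
step 2) is architecturally allowed is exactly the open question above.

/* Sketch: split one 2MB superpage leaf into 512 4KB mappings. */
static int split_super_page(struct dmar_domain *domain,
                            struct dma_pte *leaf, unsigned long iov_pfn)
{
        u64 pteval = READ_ONCE(leaf->val);
        struct dma_pte *subtable;
        int i, iommu_id;

        /*
         * 1) Prepare the split subtable in advance: the same 2MB
         *    range, expressed as 512 4KB entries with the same
         *    attributes as the superpage entry.
         */
        subtable = alloc_pgtable_page(domain->nid);
        if (!subtable)
                return -ENOMEM;
        for (i = 0; i < 512; i++)
                subtable[i].val = (pteval & ~DMA_PTE_LARGE_PAGE) |
                                  ((u64)i << VTD_PAGE_SHIFT);

        /* Make the subtable visible before hardware can walk to it. */
        wmb();

        /*
         * 2) Switch the superpage leaf entry to point to the new
         *    subtable. Both translations are valid at this point;
         *    hardware may use either one until the flush below.
         */
        WRITE_ONCE(leaf->val, virt_to_phys(subtable) |
                              DMA_PTE_READ | DMA_PTE_WRITE);

        /*
         * 3) Flush the IOTLB and the paging-structure caches; after
         *    this, hardware can only use the new subtable.
         */
        for_each_domain_iommu(iommu_id, domain)
                iommu_flush_iotlb_psi(g_iommus[iommu_id], domain,
                                      iov_pfn, 512, 0, 0);

        return 0;
}

Unlike the map path above, there is no window in which the entry is
non-present, so in-flight DMA should never fault; the whole scheme
hinges on the hardware tolerating the switch in step 2).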
>
> Anyhow, this concern is apparently not relevant. I guess I was too
> naive to assume the IOMMU management is similar to the MMU. I now
> see that there is a comment in intel_iommu_unmap() saying:
>
> /* Cope with horrid API which requires us to unmap more than the
> size argument if it happens to be a large-page mapping. */
>
> Regards,
> Nadav
>
Best regards,
baolu