From: Robin Murphy <robin.murphy@arm.com>
To: Baolu Lu <baolu.lu@linux.intel.com>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-pci <linux-pci@vger.kernel.org>,
	iommu@lists.linux.dev
Subject: Re: Question about reserved_regions w/ Intel IOMMU
Date: Thu, 8 Jun 2023 16:28:24 +0100
Message-ID: <b24a6c7b-27fc-41c0-5c82-15696b4a7dc1@arm.com>
In-Reply-To: <a1cff65b-b390-3872-25b5-dd6bbfb3524c@linux.intel.com>

On 2023-06-08 04:03, Baolu Lu wrote:
> On 6/8/23 7:03 AM, Alexander Duyck wrote:
>> On Wed, Jun 7, 2023 at 3:40 PM Alexander Duyck
>> <alexander.duyck@gmail.com> wrote:
>>>
>>> I am running into a DMA issue that appears to be a conflict between
>>> ACS and IOMMU. As per the documentation I can find, the IOMMU is
>>> supposed to create reserved regions for MSI and the memory window
>>> behind the root port. However looking at reserved_regions I am not
>>> seeing that. I only see the reservation for the MSI.
>>>
>>> So, for example, with the NIC enabled and the IOMMU enabled w/o
>>> passthrough, I am seeing:
>>> # cat /sys/bus/pci/devices/0000\:83\:00.0/iommu_group/reserved_regions
>>> 0x00000000fee00000 0x00000000feefffff msi
>>>
>>> Shouldn't there also be a memory window for the region behind the root
>>> port to prevent any possible peer-to-peer access?
>>
>> Since the iommu portion of the email bounced I figured I would fix
>> that and provide some additional info.
>>
>> I added some instrumentation to the kernel to dump the resources found
>> in iova_reserve_pci_windows. From what I can tell it is finding the
>> correct resources for the Memory and Prefetchable regions behind the
>> root port. It seems to be calling reserve_iova which is successfully
>> allocating an iova to reserve the region.
>>
>> However still no luck on why it isn't showing up in reserved_regions.
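
FWIW, the reason those windows can never show up there: the sysfs
reserved_regions file only reflects what the IOMMU driver reports
through its ->get_resv_regions() callback (gathered by
iommu_get_group_resv_regions()), whereas iova_reserve_pci_windows()
pokes the IOVA allocator directly and never goes anywhere near that
path. As a deliberately simplified sketch of the kind of callback
involved - the function name and constants below are illustrative,
not copied from the Intel driver:

	static void example_get_resv_regions(struct device *dev,
					     struct list_head *head)
	{
		struct iommu_resv_region *reg;

		/* Report the x86 interrupt/MSI hole, 0xfee00000-0xfeefffff */
		reg = iommu_alloc_resv_region(0xfee00000, 0x100000, 0,
					      IOMMU_RESV_MSI, GFP_KERNEL);
		if (reg)
			list_add_tail(&reg->list, head);

		/* Nothing in this path knows about PCI bridge windows,
		 * which is why only "msi" appears in the sysfs output
		 * quoted above. */
	}
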
> 
> Perhaps I can ask the opposite question: why should it show up in
> reserved_regions? Why should the iommu subsystem block any possible
> peer-to-peer DMA access? Isn't that a decision for the device driver?
> 
> The iova_reserve_pci_windows() you've seen is for the kernel DMA
> interfaces; it is not related to peer-to-peer accesses.

Right, in general the IOMMU driver cannot be held responsible for 
whatever might happen upstream of the IOMMU input. The DMA layer carves 
PCI windows out of its IOVA space unconditionally because we know that 
they *might* be problematic, and we don't have any specific constraints 
on our IOVA layout so it's no big deal to just sacrifice some space for 
simplicity. We don't want to have to go digging any further into 
bus-specific code to reason about whether the right ACS capabilities are 
present and enabled everywhere to prevent direct P2P or not. Other 
use-cases may have different requirements, though, so it's up to them 
what they want to do.
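
For reference, that carving is done roughly along these lines -
condensed from iova_reserve_pci_windows() in drivers/iommu/dma-iommu.c,
with error handling and the reservations between root bus windows left
out, so treat it as a sketch rather than the exact upstream code:

	static void reserve_pci_windows_sketch(struct pci_dev *dev,
					       struct iova_domain *iovad)
	{
		struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
		struct resource_entry *window;

		resource_list_for_each_entry(window, &bridge->windows) {
			dma_addr_t lo, hi;

			if (resource_type(window->res) != IORESOURCE_MEM)
				continue;

			lo = window->res->start - window->offset;
			hi = window->res->end - window->offset;

			/* Carve the bridge window out of the IOVA space so
			 * the DMA API never hands out an address that the
			 * fabric could decode as peer-to-peer. */
			reserve_iova(iovad, iova_pfn(iovad, lo),
				     iova_pfn(iovad, hi));
		}
	}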

It's conceptually pretty much the same as the case where the device (or 
indeed a PCI host bridge or other interconnect segment in-between) has a 
constrained DMA address width - the device may not be able to access all 
of the address space that the IOMMU provides, but the IOMMU itself can't 
tell you that.
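
(That kind of limit is something only the driver can declare, not
something the IOMMU layer can discover for itself - e.g. a driver for
a device that can only address 32 bits would typically do something
like the sketch below in its probe path:

	/* sketch: tell the DMA API about a device-side addressing limit
	 * which the IOMMU has no way of knowing about by itself */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
		return -ENODEV;

and the DMA API then stays within that mask, IOMMU or not.)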

Thanks,
Robin.

Thread overview: 24+ messages
     [not found] <CAKgT0UezciLjHacOx372+v8MZkDf22D5Thn82n-07xxKy_0FTQ@mail.gmail.com>
2023-06-07 23:03 ` Question about reserved_regions w/ Intel IOMMU Alexander Duyck
2023-06-08  3:03   ` Baolu Lu
2023-06-08 14:33     ` Alexander Duyck
2023-06-08 15:38       ` Ashok Raj
2023-06-08 17:10         ` Alexander Duyck
2023-06-08 17:52           ` Ashok Raj
2023-06-08 18:15             ` Alexander Duyck
2023-06-08 18:02           ` Robin Murphy
2023-06-08 18:17             ` Alexander Duyck
2023-06-08 15:28     ` Robin Murphy [this message]
2023-06-13 15:54       ` Jason Gunthorpe
2023-06-16  8:39         ` Tian, Kevin
2023-06-16 12:20           ` Jason Gunthorpe
2023-06-16 15:27             ` Alexander Duyck
2023-06-16 16:34               ` Robin Murphy
2023-06-16 18:59                 ` Jason Gunthorpe
2023-06-19 10:20                   ` Robin Murphy
2023-06-19 14:02                     ` Jason Gunthorpe
2023-06-20 14:57                       ` Alexander Duyck
2023-06-20 16:55                         ` Jason Gunthorpe
2023-06-20 17:47                           ` Alexander Duyck
2023-06-21 11:30                             ` Robin Murphy
2023-06-16 18:48               ` Jason Gunthorpe
2023-06-21  8:16             ` Tian, Kevin
