From: Robin Murphy <robin.murphy@arm.com>
To: Alex Williamson <alex.williamson@redhat.com>,
	"Tian, Kevin" <kevin.tian@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>
Subject: Re: RMRR device on non-Intel platform
Date: Thu, 20 Apr 2023 15:19:55 +0100
Message-ID: <6cce1c5d-ab50-41c4-6e62-661bc369d860@arm.com>
In-Reply-To: <20230420081539.6bf301ad.alex.williamson@redhat.com>

On 2023-04-20 15:15, Alex Williamson wrote:
> On Thu, 20 Apr 2023 06:52:01 +0000
> "Tian, Kevin" <kevin.tian@intel.com> wrote:
> 
>> Hi, Alex,
>>
>> I happened to notice that we may have an inconsistent policy for RMRR
>> devices across different vendors.
>>
>> Previously only Intel supported RMRR. Now both AMD and ARM have similar
>> mechanisms: AMD IVMD and ARM RMR.
> 
> Any similar requirement imposed by system firmware that the operating
> system must perpetually maintain a specific IOVA mapping for the device
> should impose restrictions similar to those we've implemented for VT-d
> RMRR [1].  Thanks,

Hmm, does that mean that vfio_iommu_resv_exclude() going to the trouble 
of punching out all the reserved region holes isn't really needed?
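
For the sake of discussion, the shape of what I'm referring to is roughly
the standalone sketch below. It is not the actual vfio_iommu_resv_exclude()
code: the struct and function names are made up for illustration, and the
real implementation works on a linked list of ranges and allocates new
nodes when a range has to be split.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for a node on an allowed-IOVA list. */
struct iova_range {
	uint64_t start;
	uint64_t end;	/* inclusive */
	int valid;	/* cleared once the range is punched out entirely */
};

/*
 * Punch one reserved region [rstart, rend] out of an allowed range.
 * When the hole lands in the middle, the tail goes into the spare slot
 * the caller provides (real code would allocate a new list node instead).
 */
static void punch_hole(struct iova_range *r, struct iova_range *spare,
		       uint64_t rstart, uint64_t rend)
{
	if (!r->valid || rend < r->start || rstart > r->end)
		return;				/* no overlap, nothing to do */

	if (rstart <= r->start && rend >= r->end) {
		r->valid = 0;			/* region swallows the range */
	} else if (rstart <= r->start) {
		r->start = rend + 1;		/* clip the front */
	} else if (rend >= r->end) {
		r->end = rstart - 1;		/* clip the tail */
	} else {
		spare->start = rend + 1;	/* split: keep both halves */
		spare->end = r->end;
		spare->valid = 1;
		r->end = rstart - 1;
	}
}

int main(void)
{
	/* One allowed range covering the whole 32-bit space... */
	struct iova_range r = { 0x0, 0xffffffffULL, 1 };
	struct iova_range tail = { 0, 0, 0 };

	/* ...with an MSI-style reserved window carved out of the middle. */
	punch_hole(&r, &tail, 0xfee00000ULL, 0xfeefffffULL);

	if (r.valid)
		printf("allowed: [0x%llx, 0x%llx]\n",
		       (unsigned long long)r.start, (unsigned long long)r.end);
	if (tail.valid)
		printf("allowed: [0x%llx, 0x%llx]\n",
		       (unsigned long long)tail.start, (unsigned long long)tail.end);
	return 0;
}

In the real code this would be applied for each of the device's reserved
regions in turn; the question above is whether that per-region bookkeeping
is still worthwhile if non-relaxable regions simply make the device
ineligible for attach in the first place.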

Robin.

> 
> Alex
> 
> [1]https://access.redhat.com/sites/default/files/attachments/rmrr-wp1.pdf
> 
>> RMRR identity mapping was considered unsafe for device assignment (except
>> for USB/GPU devices):
>>
>> /*
>>   * There are a couple cases where we need to restrict the functionality of
>>   * devices associated with RMRRs.  The first is when evaluating a device for
>>   * identity mapping because problems exist when devices are moved in and out
>>   * of domains and their respective RMRR information is lost.  This means that
>>   * a device with associated RMRRs will never be in a "passthrough" domain.
>>   * The second is use of the device through the IOMMU API.  This interface
>>   * expects to have full control of the IOVA space for the device.  We cannot
>>   * satisfy both the requirement that RMRR access is maintained and have an
>>   * unencumbered IOVA space.  We also have no ability to quiesce the device's
>>   * use of the RMRR space or even inform the IOMMU API user of the restriction.
>>   * We therefore prevent devices associated with an RMRR from participating in
>>   * the IOMMU API, which eliminates them from device assignment.
>>   *
>>   * In both cases, devices which have relaxable RMRRs are not concerned by this
>>   * restriction. See device_rmrr_is_relaxable comment.
>>   */
>> static bool device_is_rmrr_locked(struct device *dev)
>> {
>> 	if (!device_has_rmrr(dev))
>> 		return false;
>>
>> 	if (device_rmrr_is_relaxable(dev))
>> 		return false;
>>
>> 	return true;
>> }
>>
>> Then a device with a non-relaxable RMRR is rejected at attach time:
>>
>> static int intel_iommu_attach_device(struct iommu_domain *domain,
>>                                       struct device *dev)
>> {
>> 	struct device_domain_info *info = dev_iommu_priv_get(dev);
>> 	int ret;
>>
>> 	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
>> 	    device_is_rmrr_locked(dev)) {
>> 		dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.\n");
>> 		return -EPERM;
>> 	}
>> 	...
>> }
>>
>> But at a glance I didn't find the same check in the AMD/ARM drivers.
>>
>> Did I overlook some architectural difference which makes RMRR devices safe
>> on those platforms, or is it a gap to be fixed?
>>
>> Thanks
>> Kevin
>>
> 
