linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Jason Gunthorpe <jgg@nvidia.com>
To: "Raj, Ashok" <ashok.raj@intel.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Shenming Lu <lushenming@huawei.com>,
	"Alex Williamson (alex.williamson@redhat.com)" 
	<alex.williamson@redhat.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Jason Wang <jasowang@redhat.com>,
	"parav@mellanox.com" <parav@mellanox.com>,
	"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Joerg Roedel <joro@8bytes.org>,
	Eric Auger <eric.auger@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
	"Jiang, Dave" <dave.jiang@intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	Robin Murphy <robin.murphy@arm.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	David Woodhouse <dwmw2@infradead.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	"wanghaibin.wang@huawei.com" <wanghaibin.wang@huawei.com>
Subject: Re: [RFC v2] /dev/iommu uAPI proposal
Date: Thu, 15 Jul 2021 14:18:26 -0300	[thread overview]
Message-ID: <20210715171826.GG543781@nvidia.com> (raw)
In-Reply-To: <20210715162141.GA593686@otc-nc-03>

On Thu, Jul 15, 2021 at 09:21:41AM -0700, Raj, Ashok wrote:
> On Thu, Jul 15, 2021 at 12:23:25PM -0300, Jason Gunthorpe wrote:
> > On Thu, Jul 15, 2021 at 06:57:57AM -0700, Raj, Ashok wrote:
> > > On Thu, Jul 15, 2021 at 09:48:13AM -0300, Jason Gunthorpe wrote:
> > > > On Thu, Jul 15, 2021 at 06:49:54AM +0000, Tian, Kevin wrote:
> > > > 
> > > > > No. You are right on this case. I don't think there is a way to 
> > > > > differentiate one mdev from the other if they come from the
> > > > > same parent and attached by the same guest process. In this
> > > > > case the fault could be reported on either mdev (e.g. the first
> > > > > matching one) to get it fixed in the guest.
> > > > 
> > > > If the IOMMU can't distinguish the two mdevs they are not isolated
> > > > and would have to share a group. Since group sharing is not supported
> > > > today this seems like a non-issue
> > > 
> > > Does this mean we have to prevent 2 mdev's from same pdev being assigned to
> > > the same guest? 
> > 
> > No, it means that the IOMMU layer has to be able to distinguish them.
> 
> Ok, the guest has no control over it, as it see 2 separate pci devices and
> thinks they are all different.
> 
> Only time when it can fail is during the bind operation. From guest
> perspective a bind in vIOMMU just turns into a write to local table and a
> invalidate will cause the host to update the real copy from the shadow.
> 
> There is no way to fail the bind? and Allocation of the PASID is also a
> separate operation and has no clue how its going to be used in the guest.

You can't attach the same RID to the same PASID twice. The IOMMU code
should prevent this.

As we've talked about several times, it seems to me the vIOMMU
interface is misdesigned for the requirements you have. The hypervisor
should have a role in allocating the PASID since there are invisible
hypervisor restrictions. This is one of them.

> Do we have any isolation requirements here? its the same process. So if the
> page-request it sent to guest and even if you report it for mdev1, after
> the PRQ is resolved by guest, the request from mdev2 from the same guest
> should simply work?

I think we already talked about this and said it should not be done.

Jason

  reply	other threads:[~2021-07-15 17:18 UTC|newest]

Thread overview: 60+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-07-09  7:48 [RFC v2] /dev/iommu uAPI proposal Tian, Kevin
2021-07-09 21:50 ` Alex Williamson
2021-07-12  1:22   ` Tian, Kevin
2021-07-12 18:41     ` Alex Williamson
2021-07-12 23:41       ` Tian, Kevin
2021-07-12 23:56       ` Tian, Kevin
2021-07-13 12:55         ` Jason Gunthorpe
2021-07-13 16:26           ` Alex Williamson
2021-07-13 16:32             ` Jason Gunthorpe
2021-07-13 22:48               ` Tian, Kevin
2021-07-13 23:02                 ` Jason Gunthorpe
2021-07-13 23:20                   ` Tian, Kevin
2021-07-13 23:22                     ` Jason Gunthorpe
2021-07-13 23:24                       ` Tian, Kevin
2021-07-15  3:20 ` Shenming Lu
2021-07-15  3:55   ` Tian, Kevin
2021-07-15  6:29     ` Shenming Lu
2021-07-15  6:49       ` Tian, Kevin
2021-07-15  8:14         ` Shenming Lu
2021-07-15 12:48         ` Jason Gunthorpe
2021-07-15 13:57           ` Raj, Ashok
2021-07-15 15:23             ` Jason Gunthorpe
2021-07-15 16:21               ` Raj, Ashok
2021-07-15 17:18                 ` Jason Gunthorpe [this message]
2021-07-15 17:48                   ` Raj, Ashok
2021-07-15 17:53                     ` Jason Gunthorpe
2021-07-15 18:05                       ` Raj, Ashok
2021-07-15 18:13                         ` Jason Gunthorpe
2021-07-16  1:20                           ` Tian, Kevin
2021-07-16 12:20                             ` Shenming Lu
2021-07-21  2:13                               ` Tian, Kevin
2021-07-22 16:30                                 ` Jason Gunthorpe
2021-07-16 18:30                             ` Jason Gunthorpe
2021-07-21  2:11                               ` Tian, Kevin
2021-07-26  4:50 ` David Gibson
2021-07-28  4:04   ` Tian, Kevin
2021-08-03  1:50     ` David Gibson
2021-08-03  3:19       ` Tian, Kevin
2021-08-06  4:45         ` David Gibson
2021-08-06 12:32           ` Jason Gunthorpe
2021-08-10  6:10             ` David Gibson
2021-08-09  8:34           ` Tian, Kevin
2021-08-10  4:47             ` David Gibson
2021-08-10  6:04               ` Tian, Kevin
2021-07-30 14:51   ` Jason Gunthorpe
2021-08-02  2:49     ` Tian, Kevin
2021-08-04 14:04       ` Jason Gunthorpe
2021-08-04 22:59         ` Tian, Kevin
2021-08-05 11:27           ` Jason Gunthorpe
2021-08-05 22:44             ` Tian, Kevin
2021-08-06  4:47         ` David Gibson
2021-08-03  1:58     ` David Gibson
2021-08-04 14:07       ` Jason Gunthorpe
2021-08-06  4:24         ` David Gibson
2021-07-26  8:14 ` Jean-Philippe Brucker
2021-07-28  4:05   ` Tian, Kevin
2021-08-04 15:59 ` Eric Auger
2021-08-05  0:36   ` Tian, Kevin
2021-08-10  7:17     ` Eric Auger
2021-08-10  9:00       ` Tian, Kevin

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20210715171826.GG543781@nvidia.com \
    --to=jgg@nvidia.com \
    --cc=alex.williamson@redhat.com \
    --cc=ashok.raj@intel.com \
    --cc=baolu.lu@linux.intel.com \
    --cc=corbet@lwn.net \
    --cc=dave.jiang@intel.com \
    --cc=david@gibson.dropbear.id.au \
    --cc=dwmw2@infradead.org \
    --cc=eric.auger@redhat.com \
    --cc=hao.wu@intel.com \
    --cc=iommu@lists.linux-foundation.org \
    --cc=jacob.jun.pan@linux.intel.com \
    --cc=jasowang@redhat.com \
    --cc=jean-philippe@linaro.org \
    --cc=joro@8bytes.org \
    --cc=kevin.tian@intel.com \
    --cc=kvm@vger.kernel.org \
    --cc=kwankhede@nvidia.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=lkml@metux.net \
    --cc=lushenming@huawei.com \
    --cc=parav@mellanox.com \
    --cc=pbonzini@redhat.com \
    --cc=robin.murphy@arm.com \
    --cc=wanghaibin.wang@huawei.com \
    --cc=yi.l.liu@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).